
1.4.0 release #332


Merged
merged 17 commits into from
Sep 16, 2022

Conversation

mwjones-aws
Contributor

Issue #, if available:
#330

Description of changes:
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html#codedeploy-agent-version-history

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

chrisdibble and others added 17 commits September 15, 2022 11:57
…the behavior when launching the process within each deployment installation. This configuration is optional within the appspec file. Added integration and unit tests for the application_specification and installer.
http://infozip.sourceforge.net/FAQ.html#error-codes

During DownloadBundle, if the unzip exit code is 50, the disk is or was full. If the child thread then attempts to extract again with Ruby's unzip, the OS will signal to kill the child process.

When the master process spawns a new child, the new child does not pick up the DownloadBundle event, and the deployment becomes stuck because the agent never responds.

This commit detects unzip's exit code 50 (disk full), removes the partially extracted files, and raises an exception to post the failure to the CodeDeploy server. The error message will be visible as a lifecycle event error message, and the host-level deployment will stop instead of hanging until it times out.
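The handling described above could be sketched roughly like this; the helper names (`extract_bundle`, `disk_full?`) are illustrative, not the agent's actual implementation:

```ruby
require 'fileutils'

# Info-ZIP's documented exit code for "disk full during extraction"
UNZIP_DISK_FULL_EXIT_CODE = 50

def disk_full?(exit_status)
  exit_status == UNZIP_DISK_FULL_EXIT_CODE
end

def extract_bundle(archive, dest)
  system('unzip', '-o', archive, '-d', dest)
  status = $?.exitstatus
  if disk_full?(status)
    FileUtils.rm_rf(dest) # clean up the partially extracted files
    raise "Disk full (unzip exit code 50) while extracting #{archive}"
  end
  status
end
```

Raising here lets the failure propagate to the lifecycle event error message instead of leaving the deployment stuck.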
This change removes the bin/update script. It is no longer a supported way to update the CodeDeploy Agent.
…m aws-sdk-core"

This reverts commit b26ad112441f5db5e70b1c8126b5b91af0be4b8f.

Prior to this change, agent version 1.4.0 showed a regression on Ruby version 2.0.
To fix this regression, we have to build against a previous commit of aws-sdk-core.

This change reverts the change to the error message in the test.
This patch implements a simple noop check for the `HookExecutor`. The logic is essentially copied from `execute` a few lines below, so the code (and associated tests) are quite trivial.

I did have to make a decision around handling scripts that do not exist, which gets a bit into the weeds of how we define a noop command. `execute` will not run a script if it cannot be found at the specified path, which means that any command whose scripts all fail to exist is technically a noop. However, I think customers would rather not have us hit `PutHostCommandComplete` in the rare case that we receive a `Failed` for a lifecycle event that has scripts specified but none of them are valid paths.
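The decision described above (scripts that are specified but missing are deliberately not treated as noops) could be sketched like this; the class shape and method names are assumptions, not the agent's real `HookExecutor`:

```ruby
# Illustrative noop check; the real HookExecutor differs.
class HookExecutor
  def initialize(script_paths)
    @script_paths = script_paths
  end

  # Noop only when no scripts are specified at all. A lifecycle event
  # with scripts that point at invalid paths is NOT a noop, so it can
  # still surface a Failed result to the customer.
  def noop?
    @script_paths.empty?
  end
end
```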
This patch adds a new function onto the
`CommandExecutor` which checks if all of
the command's lifecycle events are noops.
At runtime, the `CommandExecutor` uses
metaprogramming to set up methods for each
command that create a new `HookExecutor`
for each of the command's lifecycle events.

The agent appears to support one-to-many mappings of commands to lifecycle events, but we appear to use only "identity mappings". However, just to be safe, I follow a pattern similar to the `map` function, which handles this one-to-many case.
This patch changes `CommandPoller::acknowledge_command` such that
we call `PutHostCommandComplete` for noop lifecycle events when
`PutHostCommandAcknowledgement` returns `Failed` to mitigate
the customer impact caused by lifecycle event timeouts.
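The mitigation described above can be sketched as follows; the service client and method names here are assumptions, not the agent's real API:

```ruby
# Hypothetical sketch of the acknowledge flow; the real CommandPoller
# and CodeDeploy commands client differ.
class CommandPoller
  def initialize(service)
    @service = service
  end

  def acknowledge_command(command)
    status = @service.put_host_command_acknowledgement(command)
    if status == 'Failed' && @service.command_noop?(command)
      # Complete noop commands immediately rather than letting the
      # lifecycle event hang until it times out.
      @service.put_host_command_complete(command)
    end
    status
  end
end
```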
We want to be able to verify that agents are
successfully completing noop commands once 1.4
gets released. This patch adds diagnostic
information to the `PutHostCommandComplete`
endpoint.
…knowledgement`

This patch allows us to pass diagnostic information on whether a command is a noop
from the Agent to the CodeDeploy commands service, which the CodeDeploy service
team will track.
…ng version file from S3

Why is this change needed?
---------
Prior to this change, the install script sometimes failed when the S3 connection timed out while
downloading the version file. This happens when there is a new region build; when the S3 download
is retried, it works fine.

How does it address the issue?
---------
This change adds retry logic so the download does not fail on the first attempt.
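A minimal retry helper of the kind described might look like this; the name, backoff policy, and attempt count are assumptions, not the install script's actual implementation:

```ruby
# Simple retry with linear backoff for transient failures such as
# S3 connection timeouts.
def with_retries(max_attempts: 3, base_delay: 1)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= max_attempts
    sleep(base_delay * attempts) # back off a little more each attempt
    retry
  end
end

# Usage sketch: with_retries { download_version_file_from_s3 }
```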
This patch exposes the bucket name, key, version,
and etag of the bundle if we are running an s3 deployment.
This patch exposes the commit hash as `BUNDLE_COMMIT`
when we are deploying from Github.

GHI #36
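Together, the two commits above expose deployment metadata roughly as follows; apart from `BUNDLE_COMMIT`, which the commit message names, the variable names here are assumptions for illustration:

```ruby
# Hypothetical mapping of revision metadata to environment variables
# for lifecycle hook scripts.
def bundle_env_vars(revision)
  case revision[:type]
  when :s3
    {
      'BUNDLE_BUCKET'  => revision[:bucket],
      'BUNDLE_KEY'     => revision[:key],
      'BUNDLE_VERSION' => revision[:version],
      'BUNDLE_ETAG'    => revision[:etag]
    }
  when :github
    { 'BUNDLE_COMMIT' => revision[:commit] }
  else
    {}
  end
end
```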
Contributor

@philstrong philstrong left a comment


Cross-referenced with the release notes; looks good.

@mwjones-aws mwjones-aws merged commit 1a53e8f into master Sep 16, 2022
@mwjones-aws mwjones-aws deleted the 1.4.0-release branch September 16, 2022 17:12
@t0shiii t0shiii mentioned this pull request Sep 20, 2022
8 participants