High memory consumption #32
Hi,
+1 I'm seeing high memory consumption too. For further info please see Case ID 1557344221
Any updates on this?
Sorry, but what do you mean by Case ID 1557344221?
Also FWIW, it's the polling process that's taking up all the memory. The "master" process is sitting at 15MB which is very reasonable.
I'm seeing memory continue to grow with each deploy and not be freed. We are currently over 1GB!
+1. I am seeing the polling process taking 1.138g after two days of it being up. One possible compounding issue is that we have two applications that depend on CodeDeploy for deployment on this instance, but it still seems excessive.
For people still experiencing this issue: in our case, this seems to have been resolved following an update of the Ruby version. The Ruby versions in question were 2.0.0p353 (bad memory consumption) and 2.0.0p598 (much more reasonable memory consumption). While there was also a simultaneous OS update, I believe that this did not influence the issue.
I'm on Ruby 2.3 and this is still an issue.
+1. We are on Ruby 2.3.1 and experiencing the same behavior.
Our bundle size is ~80MB -- memory usage prior to a deployment is 28588 kB RSS and jumps to 674184 kB post-deployment; after ~2-3 runs through our pipeline we get OOM failures during deployment.
I'm also encountering this issue. I've been running CodeDeploy for a few months and we're seeing memory creep on every deploy. As mentioned in #6, I added a few commands that run when the deployment succeeds.
There is nothing other than the CodeDeploy agent running in that Ruby process.
Digging deeper, AWS CodeDeploy must be holding the bundle in memory and not releasing it after the hook. There probably needs to be a modification somewhere in the code base to deallocate it. Aggressively tuning the Ruby GC is another option, but may not be easy for those running Ruby applications.
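For anyone who wants to experiment with the GC-tuning route mentioned above, here is a minimal sketch. It assumes a systemd host where the agent is exposed as the codedeploy-agent service (adjust for SysV-init hosts); the variable names are Ruby's standard GC tuning knobs, but the specific values are illustrative only, not recommendations.

```bash
# Hypothetical sketch: pass Ruby GC tuning variables to the agent service so
# it grows its heap more conservatively. Values are illustrative only.
sudo mkdir -p /etc/systemd/system/codedeploy-agent.service.d
sudo tee /etc/systemd/system/codedeploy-agent.service.d/gc-tuning.conf > /dev/null <<'EOF'
[Service]
Environment="RUBY_GC_HEAP_GROWTH_FACTOR=1.1"
Environment="RUBY_GC_HEAP_GROWTH_MAX_SLOTS=100000"
Environment="RUBY_GC_MALLOC_LIMIT_MAX=16000000"
EOF

# Reload systemd and restart the agent so the new environment takes effect.
sudo systemctl daemon-reload
sudo systemctl restart codedeploy-agent
```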
Seeing this as well. Memory usage continually creeps up. After a dozen or so deploys, the agent is holding onto ~500MB of memory. Eventually, deployments start failing because the deployment scripts can no longer allocate memory. This seems like a pretty egregious problem to have open for so long. (See issue #6 as well.)
So here's my hacky workaround to the aws-codedeploy-agent memory leak issue. I added a line to schedule a restart of the agent to the end of the last script (e.g. a script that runs from the ValidateService hook in the appspec.yml file):
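The commenter's exact line isn't shown above, but as a rough sketch of that style of workaround (assuming a Linux instance where the agent runs as the codedeploy-agent service and hooks execute as root; delay, log path, and service command are illustrative), the end of the final hook script might look something like this:

```bash
#!/bin/bash
# ... existing ValidateService checks above ...

# Hypothetical workaround sketch: restart the agent a couple of minutes after
# this hook returns, so the restart does not interrupt the in-flight deployment.
nohup bash -c 'sleep 120; service codedeploy-agent restart' \
  > /tmp/restart-codedeploy-agent.log 2>&1 &
```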
Without something like this, builds start failing for me with 'Cannot Allocate Memory' errors in the logs after about 2-3 builds. Just to be clear, this isn't a "fix" that resolves the issue, so please don't close it. It's just a workaround hack. The real fix might be to ensure that the codedeploy agent is not reading files into memory with every build.
I've taken memory dumps with memdump before and after deployments to help with debugging this. I monitored the agent with top during deployments: before the first deployment it took about 45 MiB of memory, after the first about 80 MiB, and after the second about 100 MiB. See attached dumps below:

All of these are very high. I think 45 MiB of memory usage for the agent while idle is too much, and the 100 MiB after two deployments is unacceptable - we're running this on t2.nano instances, and having the codedeploy agent use more than 20% of available memory means we're seriously looking at alternatives right now to avoid having to upgrade all our servers to t2.micro and doubling the cost.

Dumps were taken by installing rbtrace and memdump and then applying the following patch to /opt/codedeploy-agent/lib/instance_agent/agent/base.rb:
This will produce a memory dump every minute and after every step during deployment.
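The patch itself was not carried over into this thread. As a rough alternative for taking a one-off dump without patching the agent (assuming the rbtrace gem is loaded in the agent's Ruby process and /tmp is writable; process matching and paths are illustrative), something like this could work:

```bash
# Hypothetical sketch: ask the running agent process to write a heap dump.
# Requires the agent's Ruby process to have loaded the rbtrace gem.
AGENT_PID=$(pgrep -f 'codedeploy-agent' | head -n 1)   # pick one agent process

rbtrace -p "$AGENT_PID" -e '
  Thread.new do
    require "objspace"
    GC.start
    File.open("/tmp/codedeploy-agent-heap.json", "w") do |f|
      ObjectSpace.dump_all(output: f)
    end
  end
'
```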
I just tried replacing the memory dump with […]. Basic stats of the dumps (with memdump) show that there does seem to be significant leakage in the agent:
@Raniz85 In my experience, the memory consumption seems to be proportional to the size of the build asset files. Maybe the files are being read into memory and that memory is never released?
Thanks for the research everyone! For those still running into this issue, can you include the specific version (ruby -v) of the Ruby run-time you are using with the CodeDeploy agent?
@Jmcfar 2.3, 2.3.1, 2.4, 2.4.1, built from source on Ubuntu 16.04
@Jmcfar: Running on Ubuntu Trusty (14.04.5)
@Jmcfar: Can you give us an update on progress please?
@Jmcfar: Can we get an update on this issue?
Running into this on Ubuntu 16.04.
Seems that AWS Code Deploy is not quite ready for prime time. Does the aws-codedeploy-agent project even have an active maintainer at AWS? I see open pull requests that have been hanging around for a month with no comment.
@jcowley AWS ain't all it's cracked up to be
2 years and this is still an issue? We have hit the same problem - and seemingly even Enterprise Level Support from AWS will not push them to a resolution on this matter.
Hi, is there an update on this issue?
This ticket has been open for five years now. Do yourself a favor and choose another product.
Let's give a standing ovation to the AWS technical team on this issue. I have a strong suspicion that it's an AWS tactic to force customers onto higher-memory instances so they can make extra money. CodeDeploy is consuming 50% of the memory on a t2.small instance, whereas a t2.micro is enough for our requirements.
It's been 6 months... no update... @annamataws???
Well, at this point, I lost all interest in this feature. Such a waste.
1. Going through the normal deploy process until the build.
2. During the build stage, Python packages are downloaded and created; these packages are kept and used once on the server to decrease downtime.
3a. Attempted to install the Stanza models during the build process; however, due to their overall size and the instance keeping 5 previous builds plus the last successful one, this led to the container getting full and causing errors. As a result, the Stanza models are downloaded as part of the startup script instead.
3b. To try to combat this, a remove-archives.sh file is used. I still got the issue with this file when downloading Stanza, so I kept it but also pushed the Stanza download to the Deploy step.
4. Prior to starting the download, a swap file is created on the instance in case of an influx of memory usage, to try to maintain container health (see create-swap.sh; a rough sketch follows this list).
5. After this, Stanza is downloaded and the service is brought up.
6. As part of ValidateService, a restart-codedeploy-agent.sh file is used, which backgrounds a process to restart the CodeDeploy agent after the entire deployment is complete. This is due to a memory leak in the AWS CodeDeploy listeners that increases memory usage with each deployment (see: aws/aws-codedeploy-agent#32).

An Auto Scaling Group is used to handle instance restarts/turnover, as the memory issue can cause the health of services to go down. Initial test on a lower branch.
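For reference, a minimal sketch of a create-swap.sh along those lines might look like the following; the size, path, and commands are illustrative, not the commenter's actual script, and it assumes the hook runs as root:

```bash
#!/bin/bash
# Hypothetical create-swap.sh sketch: add a 1 GiB swap file so a memory spike
# during deployment is less likely to take the instance down.
set -euo pipefail

if ! swapon --show | grep -q '/swapfile'; then
  fallocate -l 1G /swapfile || dd if=/dev/zero of=/swapfile bs=1M count=1024
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
fi
```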
Please fix this issue ASAP. If I had been aware of this problem I would never have gone with AWS Pipeline; unfortunately, I have already set up my whole CI/CD process, and this agent keeps becoming unresponsive and causing a lot of headaches. If there is no fix soon, I will need to migrate to a more robust service like Jenkins.
Just as a suggestion: what I did to solve this is put a job in the queue at the end of my deployment that restarts the CodeDeploy agent, so the memory usage is always "low". I wish I didn't have to do that, but it solves the problem. At least mine.
Please make sure you set up SSM Distributor integration with CodeDeploy if you want to get a new version of the CodeDeploy agent. Otherwise you will not get the latest version of the agent with the Windows fix (actually, you will get it in some rare cases, but in general you will need to update it manually without Distributor). We are releasing a new version of the agent, so please onboard with Distributor if you want to get it.
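For anyone who hasn't used Distributor before, a minimal sketch with the AWS CLI is below; the instance ID is a placeholder, and the document/package names (AWS-ConfigureAWSPackage, AWSCodeDeployAgent) should be verified against the current AWS documentation.

```bash
# Hypothetical sketch: install or upgrade the CodeDeploy agent on a single
# instance via SSM Distributor. The instance ID is a placeholder.
aws ssm send-command \
  --document-name "AWS-ConfigureAWSPackage" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters '{"action":["Install"],"name":["AWSCodeDeployAgent"]}'
```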
@Helen1987 when you say "with Windows fix" do you mean it addresses CodeDeploy memory usage on Windows OS? If so I'd be surprised if that addresses 1% of the problem, as I'm sure most people deploy on Linux.
In my case, a download of a ~50 MB ZIP bundle to an EC2 instance now takes several minutes with the CodeDeploy agent and risks a small EC2 instance running out of memory, while the same download on the same machine with aws-cli takes far less than 1 second.

My workaround now is to change the CodeBuild process of all flows so that the main ZIP bundle no longer contains the actual application, but only the […]. The entire deployment now went down from around 4-6 minutes to ~6 seconds, including stopping the app, download, installation, starting and verification.

I hope this may help some users to find a solution that works for them.
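As an illustration of that kind of setup (not the commenter's actual scripts; bucket, key, and target paths are placeholders, and the instance is assumed to have an IAM role with read access to the artifact bucket), the trimmed bundle could carry only the appspec and hook scripts, with a hook pulling the real artifact via the AWS CLI:

```bash
#!/bin/bash
# Hypothetical AfterInstall hook sketch: fetch the real application artifact
# with the AWS CLI instead of shipping it inside the CodeDeploy bundle.
set -euo pipefail

aws s3 cp "s3://my-artifact-bucket/releases/app-latest.zip" /tmp/app.zip
unzip -o /tmp/app.zip -d /var/www/app
rm -f /tmp/app.zip
```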
I went down the same route, Manc, but used an EFS drive to hold the build.
@paulca99 sorry, I did not realize this is the “high memory” thread. Correction: the new release includes the high-memory fix as well.
@Helen1987 Just to clarify, this GitHub issue is resolved then, correct? So this issue can be closed?
This issue is fixed in our latest release v1.1.2.
Doesn't seem to fix anything related to this; each deployment still adds additional memory usage to the codedeploy-agent process, and it seems it never releases it. Version used: agent_version: OFFICIAL_1.1.2-1855_deb
@spaivaras Does your system have `unzip` installed?
@fleaz yes it does
Also tried both gzip and zip as artifacts.
Seconding what @spaivaras mentioned. Just updated to the latest version and it's still using more RAM after every deploy. Unzip is also installed.
Hi, I am experiencing the same issue too, and it seems like the memory leak got even worse after the update to the latest version. Relevant packages: […]
Problem still present.
@AnandarajuCS @feverLu @amartyag or any other contributors, is there any chance we could get this one re-opened? I've been seeing this intermittently for some time. I've just upgraded from 1.0.1 to the current build of 1.2.1 today, and the results (before and after, with a manual restart, just to be sure) are the same; codedeploy-agent / ruby eats around 150MB at a time and doesn't want to release it. Setup: […]
I'm getting nothing in the logs other than the usual polling. This particular instance is a fairly simple WordPress t2.nano staging server. With only internal traffic, there are no issues with resources, except when CodeDeploy runs. Happy to troubleshoot if necessary.
This specific defect was resolved, and I'm going to lock this thread to prevent it from being errantly re-opened for any new issues that exhibit similar problems.

To give some more insight into the resolution: the leaks in this issue were being caused by the 3rd-party in-memory library being used to unpack deployment bundles. This library had some memory-optimization deficiencies fundamental to its design, and our solution was to replace it with the system `unzip`. However, most system `unzip` […]. If you've confirmed the system `unzip` […].

@mike-mccormick if you identify that something else is happening here and you'd like to help us troubleshoot any additional memory leaks you're running into, let's open up a new issue and provide all the relevant details there rather than conflate any new problems with this old and already resolved thread. When you open a new issue, please remember to do the following first:
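As a side note for anyone following the troubleshooting above: a quick way to confirm that the system `unzip` the maintainer refers to is actually installed. This is a hypothetical check using Debian/Ubuntu package commands; adjust for your distribution.

```bash
# Hypothetical sketch: verify that a system unzip is available for the agent
# to use when unpacking bundles, per the resolution described above.
if ! command -v unzip > /dev/null; then
  sudo apt-get update && sudo apt-get install -y unzip
fi
unzip -v | head -n 1   # print the installed unzip version
```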
Hello. When I restart codedeploy-agent it takes about 26 MB, but after one deploy it takes 300-350 MB. Is that OK or not? Memory isn't freed when the deploy is finished; therefore, I get an error about memory allocation on the next build.