Behemoths periodically run out of inodes #1633
Comments
Hmm, I remember talking with @cunei once about the possibility of shallow-cloning. I'll see if I can dig that conversation up.
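For reference, a shallow single-branch clone would look roughly like the sketch below; the repository URL and directory are placeholders, and whether the community build's cloning machinery can be switched to something like this is exactly the open question.

```
# Hedged sketch: a shallow, single-branch clone fetches far fewer objects and refs.
# The URL and directory are placeholders, not the community build's actual configuration.
git clone --depth 1 --single-branch https://github.com/example/some-project.git some-project
```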
Alternatively we can create a new EBS volume with a new file system where we explicitly specify the number of inodes on creation (https://askubuntu.com/questions/600159/how-can-i-create-an-ext4-partition-with-an-extra-large-number-of-inodes), then copy over the files.
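A minimal sketch of that approach, assuming a fresh volume attached as /dev/xvdf; the device name, inode count, mount point, and source path are all assumptions, not taken from the actual setup:

```
sudo mkfs.ext4 -N 30000000 /dev/xvdf         # -N sets the total number of inodes explicitly
sudo mkdir -p /mnt/newfs
sudo mount /dev/xvdf /mnt/newfs
sudo rsync -aHAX /home/jenkins/ /mnt/newfs/  # source path is hypothetical; preserves hard links, ACLs, xattrs
```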
At present, on each behemoth I need to blow away the community build directories under workspace every two months or so. It's very little burden.
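For anyone repeating that manual cleanup, it amounts to something like the following; the workspace path is a guess at a typical Jenkins layout, not necessarily the behemoths' actual one:

```
# Hedged sketch: wipe one job's community build directories; other jobs would be handled the same way.
rm -rf /home/jenkins/workspace/scala-2.13.x-jdk11-integrate-community-build/*
```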
Closing, as I think the status quo here is okay. We've recently added JDK 20, which increases the pressure somewhat, so we'll see, but in the meantime I don't think this needs to stay open.
Did it successfully on behemoth-3.
Spinning off the discussion from scala/scala-dev#732 (comment) into a new ticket.
Indeed, it looks like inodes are more likely the issue than actual disk space: on behemoth-1, inode usage is close to exhaustion while disk space looks fine.
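The checks behind that observation are the usual ones; a minimal sketch, assuming the relevant filesystem is the root one:

```
df -i /    # inode totals, used, free; an IUse% near 100% points at inode exhaustion
df -h /    # disk space in human-readable units, for comparison
```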
The community build workspaces have huge numbers of files and directories. For example, for the "scala-2.13.x-jdk11-integrate-community-build" job there are currently 103 extraction directories.
A single one of those accounts for more than 200k inodes.
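A hedged sketch of the kind of counting involved; the paths are illustrative (the real layout on the behemoths may differ), and `find | wc -l` only approximates inode usage since hard links are counted more than once:

```
job=~/workspace/scala-2.13.x-jdk11-integrate-community-build   # hypothetical path
ls -d "$job"/*/ | wc -l                    # number of extraction directories
find "$job"/some-extraction-dir | wc -l    # rough inode count (files + dirs) for a single one
```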
Looking at things a bit, it seems we could save more than 40% of the inodes by not pulling in all the git refs to pull requests; every pull request shows up as its own ref in each clone.
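To see how many of a clone's refs come from pull requests, something like this should work, assuming they are fetched under the usual `refs/pull/*` namespace (how the community build's clones actually lay them out is not confirmed here):

```
# Run inside one of the cloned repositories; the path is illustrative.
cd ~/workspace/scala-2.13.x-jdk11-integrate-community-build/some-extraction-dir/some-project
git for-each-ref 'refs/pull/*' | wc -l   # refs fetched from pull requests
git for-each-ref | wc -l                 # all refs, for comparison
```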
Some directory counting, and a look at the files in an extraction, again turn up a large number of git refs corresponding to pull requests.
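Each loose ref is a separate file under `.git/refs`, so a rough way to see how much the pull-request refs contribute on disk (hedged: refs already packed into `.git/packed-refs` won't show up this way):

```
find .git/refs -type f | wc -l                    # all loose ref files in one clone
find .git/refs -type f -path '*/pull/*' | wc -l   # those corresponding to pull requests
```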
@SethTisue do you think we can do something about these git refs to pull requests?
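One possible angle, depending on how the clones are configured (a hedged sketch, not a statement of how the community build tooling currently fetches): check each clone's fetch refspec and narrow it so only branch heads are fetched.

```
# Inspect which refspecs a clone fetches; a broad refspec such as '+refs/*:refs/*'
# pulls in every pull-request ref as well.
git config --get-all remote.origin.fetch

# Hedged sketch: limit fetching to branch heads only.
git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
```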