Test that the size of a root drive is capped #298
Conversation
runtime/limits_integ_test.go (outdated)

/dev/vdb 975.9M 8.4M 900.3M 1% /
`, result.stdout, "stdout")

// TODO: Doesn't it have only 900MB? The written records are higher than expected.
Where did the 975.9M disk come from? It doesn't look like that's defined as part of the test, but rather an implementation detail of the naive snapshotter. Is that correct?
Yes. It comes from `sparseImageSizeMB = 1024`. Let me check whether we have a way to set an explicit limit or not.
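For context, here is a minimal sketch of the mechanism, assuming the naive snapshotter truncates a sparse file to `sparseImageSizeMB` and formats it with `mkfs.ext4`; `createSparseImage` and the paths are illustrative, not the snapshotter's actual code:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// sparseImageSizeMB mirrors the constant referenced above: every image is
// sized at roughly 1 GiB, which is why df inside the guest reports ~975.9M
// after filesystem overhead.
const sparseImageSizeMB = 1024

// createSparseImage is a hypothetical helper: it allocates a sparse file
// of sparseImageSizeMB and formats it as ext4.
func createSparseImage(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	// Truncate punches a hole, so no disk blocks are consumed until written.
	if err := f.Truncate(sparseImageSizeMB * 1024 * 1024); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// -F forces mkfs.ext4 to operate on a regular file.
	if out, err := exec.Command("mkfs.ext4", "-F", path).CombinedOutput(); err != nil {
		return fmt.Errorf("mkfs.ext4: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createSparseImage("/tmp/rootfs.img"); err != nil {
		log.Fatal(err)
	}
}
```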
Yes. With devmapper, the size is much, much smaller.
--- FAIL: TestDiskLimit_Isolated (6.98s)
limits_integ_test.go:118:
Error Trace: limits_integ_test.go:118
Error: Not equal:
expected: "Filesystem Size Used Available Use% Mounted on\n/dev/vdb 975.9M 8.4M 900.3M 1% /\n"
actual : "Filesystem Size Used Available Use% Mounted on\n/dev/vdb 120.0M 7.0M 104.0M 6% /\n"
Diff:
--- Expected
+++ Actual
@@ -1,3 +1,3 @@
Filesystem Size Used Available Use% Mounted on
-/dev/vdb 975.9M 8.4M 900.3M 1% /
+/dev/vdb 120.0M 7.0M 104.0M 6% /
Test: TestDiskLimit_Isolated
Messages: stdout
limits_integ_test.go:123:
Error Trace: limits_integ_test.go:123
Error: Not equal:
expected: "952+0 records in\n951+0 records out\n"
actual : "111+0 records in\n110+0 records out\n"
Diff:
--- Expected
+++ Actual
@@ -1,3 +1,3 @@
-952+0 records in
-951+0 records out
+111+0 records in
+110+0 records out
Test: TestDiskLimit_Isolated
Messages: stderr
FAIL
Because I'm specifying base_image_size = "128MB".
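For reference, a sketch of where that value lives, written the way a test harness might embed the devmapper snapshotter's containerd config before starting the daemon; the plugin key varies by containerd version, and the pool name and paths are placeholders:

```go
package main

import (
	"log"
	"os"
)

// devmapperConfig is a hypothetical containerd config fragment. pool_name
// and root_path are placeholders. base_image_size caps each container's
// root filesystem, which is why df inside the guest reports ~120M.
const devmapperConfig = `
[plugins]
  [plugins.devmapper]
    pool_name       = "fcci--vg-thinpool"
    root_path       = "/var/lib/containerd/devmapper"
    base_image_size = "128MB"
`

func main() {
	// Write the fragment where the test's containerd instance will read it.
	if err := os.WriteFile("/tmp/containerd-devmapper.toml", []byte(devmapperConfig), 0644); err != nil {
		log.Fatal(err)
	}
}
```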
Commented about where the size comes from. We could make the naive snapshotter more configurable, but the snapshotter is not meant for production, so I'm not sure the value would justify the complexity.
runtime/limits_integ_test.go (outdated)

assert.Equal(uint32(1), result.exitCode)
assert.Equal("", result.stdout, "stdout")
assert.Equal("sh: out of memory\n", result.stderr, "stderr")
I don't see a memory limit specified anywhere. Where are you getting a limit from?
This limit comes from Firecracker's default memory limit. I'll add an explicit CreateVM request.
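A minimal sketch of what that explicit request could look like through the firecracker-containerd control client; the socket path, VM ID, and 128 MiB value are assumptions for illustration:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd/namespaces"
	fcclient "github.com/firecracker-microvm/firecracker-containerd/firecracker-control/client"
	"github.com/firecracker-microvm/firecracker-containerd/proto"
)

func main() {
	ctx := namespaces.WithNamespace(context.Background(), "default")

	// Socket path is an assumption, matching the ttrpc address used in
	// the firecracker-containerd examples.
	client, err := fcclient.New("/run/firecracker-containerd/containerd.sock.ttrpc")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// An explicit MemSizeMib means the test controls the limit it asserts
	// on, instead of depending on Firecracker's default.
	_, err = client.CreateVM(ctx, &proto.CreateVMRequest{
		VMID: "mem-limit-test",
		MachineCfg: &proto.FirecrackerMachineConfiguration{
			MemSizeMib: 128,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```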
Putting it in the CreateVM request would help a lot. I think another useful thing would be to assert on the amount of memory the firecracker process is using (as a proxy for the amount of memory used by the VM). There would need to be some amount of buffer for the overhead of Firecracker itself, but that way we'd be testing the VMM and our configuration of it rather than the behavior of the Linux kernel inside the VM.
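If we took the process-memory route, a rough sketch of the proxy measurement could read `VmRSS` from `/proc`; the PID lookup and any buffer threshold are left as assumptions:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// vmmResidentMemoryKB reads VmRSS for a PID from /proc. Any assertion built
// on it would need a buffer for Firecracker's own overhead on top of the
// guest's memory.
func vmmResidentMemoryKB(pid int) (int, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "VmRSS:") {
			fields := strings.Fields(line) // e.g. ["VmRSS:", "123456", "kB"]
			if len(fields) >= 2 {
				return strconv.Atoi(fields[1])
			}
		}
	}
	return 0, fmt.Errorf("VmRSS not found for pid %d", pid)
}

func main() {
	kb, err := vmmResidentMemoryKB(os.Getpid()) // self, for demonstration
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("resident memory: %d kB\n", kb)
}
```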
I agree with Sam, but I'd go further: all firecracker-containerd does is translate CreateVM API parameters into Firecracker API calls. So an e2e test of just our code would make a CreateVM call with a given memory limit, then open a client directly against the VMM and verify it reports it was actually configured with the expected memory limit. Verifying that the memory limit was passed through as expected is what we actually care about testing, not that the Firecracker implementation also honors that limit (presumably they already have tests for that in their own repo).
Now, if it turns out that asking the VMM directly for its configured memory limit is actually harder than asserting on the VMM process's memory usage, then I'd be fine with asserting on the process's memory usage. I just suspect that the issue Sam raised (accounting for the VMM's own memory usage) complicates that approach enough that it's probably easier to write a non-flaky test that simply talks to the VMM API endpoint and asserts on the outcome of the CreateVM call.
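Asking the VMM directly could be as small as a `GET /machine-config` against the Firecracker API socket (which reports the configuration the VMM accepted, including `mem_size_mib`); the socket path here is a placeholder the test would derive from the VM it created:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Hypothetical path; the test would derive it from the VM it created.
	const apiSock = "/run/firecracker.sock"

	// Firecracker serves its REST API over a unix socket, so point the
	// HTTP client's dialer at the socket instead of TCP.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", apiSock)
			},
		},
	}

	// The response includes the memory size the VMM was actually
	// configured with, which is the value the test asserts on.
	resp, err := client.Get("http://localhost/machine-config")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", body) // e.g. {"vcpu_count":1,"mem_size_mib":128,...}
}
```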
Yeah. I agree that this test is checking what Firecracker is supposed to do. Firecracker has tests like firecracker-microvm/firecracker@36583c6 to confirm the behavior.
Checking only the data transform layer would not cover handling of stdout, stderr and/or exit code, but those are technically covered by the disk usage test I have in this PR.
I have removed that from this PR. I'm still not sure what the best way to confirm the behavior would be.
We could also set the limit through https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#memory, as sketched below.
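A sketch of that alternative using containerd's OCI spec options; the addresses, image, container ID, and 64 MiB value are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	ctx := namespaces.WithNamespace(context.Background(), "default")

	client, err := containerd.New("/run/firecracker-containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	image, err := client.GetImage(ctx, "docker.io/library/alpine:latest")
	if err != nil {
		log.Fatal(err)
	}

	// WithMemoryLimit sets linux.resources.memory.limit in the OCI spec,
	// a per-container cgroup limit, as opposed to the VM-wide MemSizeMib.
	_, err = client.NewContainer(ctx, "mem-limit-test",
		containerd.WithNewSnapshot("mem-limit-snap", image),
		containerd.WithNewSpec(
			oci.WithImageConfig(image),
			oci.WithMemoryLimit(64*1024*1024), // 64 MiB, illustrative
		),
	)
	if err != nil {
		log.Fatal(err)
	}
}
```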
Force-pushed from 099d169 to 558a4ca
I'm definitely not wild about this, as it really seems like a test of the snapshotter itself, but I'm okay with it going in for now. Maybe in the future we can move this to just be a test of the snapshotter.
runtime/limits_integ_test.go

}
}

func TestDiskLimit_Isolated(t *testing.T) {
nit: I'd move TestDiskLimit_Isolated up to the top of the file, since this is actually the entry point and the other functions are helpers.
Okay. Will do.
Let me move runTask to integ_test.go so it can be used by other tests.
Signed-off-by: Kazuyoshi Kato <[email protected]>
…ependabot/go_modules/github.com/containernetworking/plugins-0.9.0 Bump github.com/containernetworking/plugins from 0.8.7 to 0.9.0
Issue #, if available:
N/A

Description of changes:
This change adds integration tests that verify in-container workloads are restricted correctly.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.