
Adding v0.15.0 firecracker docker image #87


Closed
wants to merge 6 commits into from

Conversation

Contributor

@xibz xibz commented Mar 9, 2019

Adds a docker file that allows for testing of v0.15.0 of Firecracker.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

Contributor

@nmeyerhans nmeyerhans left a comment


In keeping with convention, please name your docker file Dockerfile. Append a suffix for clarity if you want, e.g. Dockerfile.fc-latest
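A suffixed Dockerfile name only needs to be selected explicitly at build time (the image tag here is illustrative, not from this repo):

```shell
# -f selects a Dockerfile with a non-default name;
# the build context (.) is still the current directory.
docker build -f Dockerfile.fc-latest -t firecracker-go-sdk-test .
```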

Contributor

@sipsma sipsma left a comment


Just FYI, I think in the fullness of time it might be beneficial for us to merge your efforts here with what I just sent out for PR in the firecracker-containerd repo. There is a fair amount of overlap (i.e. both Dockerfiles build firecracker).

For now it's probably best to get the two Dockerfiles out separately, but we can think about how we could have one unified Dockerfile for both, which should make maintenance and consistency simpler


WORKDIR "$FIRECRACKER_GO_SDK_SRC_DIR"
RUN cp -rf testdata/* "$TMP_BUILD_DIR"
RUN curl -fsSL -o "$TMP_BUILD_DIR"/vmlinux https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin
Contributor


You can use ADD https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin <destination inside container>, which will result in the build checking whether the remote file changed and re-downloading it if so.

Contributor


ADD has some surprising behavior around auto-extraction of tarballs and always creating a separate layer, so I don't really have a problem avoiding it. See this best-practices document for some details.
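As a sketch of the trade-off under discussion (URL taken from the diff above; the /build destination path is illustrative):

```dockerfile
# Option 1: ADD re-checks the remote file on each build,
# but always creates its own image layer.
# ADD https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin /build/vmlinux

# Option 2: curl keeps the download inside a RUN instruction,
# which can be chained with other setup steps in a single layer.
RUN curl -fsSL -o /build/vmlinux \
    https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin
```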

musl-tools

# Add musl to rust std
RUN mkdir "$TMP_BUILD_DIR" \
Contributor


Why do you need to use "$TMP_BUILD_DIR"? Did you want to make this variable instead of picking a constant path?

Contributor Author


Nah, it could be a constant path; I just thought ENV was specifically for environment variables and didn't know what else would be considered a constant besides ENV. TMP_BUILD_DIR is used as the test folder where the tests search for firecracker, jailer, the kernel image, and root files.

Contributor


I would probably just write the literal /build instead of using an environment variable.

Contributor Author


I went with an ENV because it's easier to change if we ever decide to move it from /build.
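A minimal sketch of the two options being weighed here (paths illustrative):

```dockerfile
# Option A: write the literal path at every use site
RUN mkdir /build

# Option B: define the path once and reference it everywhere;
# relocating it later is then a one-line edit
ENV TMP_BUILD_DIR=/build
RUN mkdir "$TMP_BUILD_DIR"
```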



ARG FIRECRACKER_SRC_DIR="/firecracker"
ARG FIRECRACKER_BUILD_DIR="$FIRECRACKER_SRC_DIR/build"
ARG CARGO_REGISTRY_DIR="$FIRECRACKER_BUILD_DIR/cargo_registry"
ARG FIRECRACKER_VERSION="0.15.0"
Contributor


While I think you want to get rid of most of the ARG statements, this one makes a lot of sense to keep.
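Keeping FIRECRACKER_VERSION as an ARG means the version can be overridden at build time without editing the Dockerfile, e.g. (tag and version shown are illustrative):

```shell
docker build --build-arg FIRECRACKER_VERSION=0.15.0 -t fc-test .
```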

&& rustup target add x86_64-unknown-linux-musl

# Install Firecracker and set binaries to TMP_BUILD_DIR
RUN git clone https://github.com/firecracker-microvm/firecracker.git
Contributor


Is there any particular reason you have these as separate RUN statements instead of chaining them together? In general, each directive (like RUN, ENV, etc) adds a new layer to the resulting image. When you're building images, you generally want to reduce the total number of layers except when you want to share layers between disparate images (think a base OS or language image). This firecracker-build one is internal only and you never really push it as an artifact, but introducing separate layers still has a performance impact during the build as you cause Docker's storage driver (layer storage component) to create new snapshots for each one.

I'd rewrite like this:

RUN git clone https://github.com/firecracker-microvm/firecracker.git && \
    cd firecracker && \
    git checkout tags/v"$FIRECRACKER_VERSION" && \
    cargo build --release --features vsock --target-dir /build && \
    ...

Contributor Author


I had no idea that having multiple RUN commands caused such overhead. I've gone ahead and updated it based on your feedback.

@xibz xibz requested review from samuelkarp and nmeyerhans March 14, 2019 22:08
Contributor

@nmeyerhans nmeyerhans left a comment


This works. One last thing: we'll need to update the clean rule to remove the stamp file and anything else created on the filesystem that isn't already cleaned up with go clean. I doubt we'll want to also remove the generated image from Docker, but that is up to you.

@xibz xibz requested a review from nmeyerhans March 18, 2019 18:23
@@ -27,4 +27,27 @@ all-tests:
generate build clean:
Contributor


You should remove "clean" from this line.

@@ -27,4 +27,27 @@ all-tests:
generate build clean:
go $@ $(EXTRAGOARGS)

.PHONY: all generate clean build test unit-tests all-tests
clean:
go $@ $(EXTRAGOARGS)
Contributor


Since you don't have a variable argument, you can just write go clean $(EXTRAGOARGS).
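Applying that suggestion, the clean target from the diff would read something like (a sketch, not the final Makefile):

```make
clean:
	go clean $(EXTRAGOARGS)
	# --force to prevent erroring on the stamp file not existing
	rm --force sandbox-test-fc-build-stamp
```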

clean:
go $@ $(EXTRAGOARGS)
# --force to prevent erroring on stamp file not existing
rm --force sandbox-test-fc-build-stamp
Contributor


You can also write rm sandbox-test-fc-build-stamp || : to prevent an error. || is a logical or, and : is a builtin that always returns true.
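As a quick illustration of that idiom (the file name here is a throwaway example):

```shell
# rm on a missing file fails, but '|| :' discards the failure:
# '||' runs its right-hand side only when the left side fails,
# and ':' is a builtin that always exits 0.
rm /tmp/no-such-stamp-file 2>/dev/null || :
echo "exit status: $?"
# prints "exit status: 0"
```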

Contributor Author

xibz commented Aug 7, 2019

Closing due to inactivity and age.

@xibz xibz closed this Aug 7, 2019
pendo324 pushed a commit to pendo324/firecracker-go-sdk that referenced this pull request Aug 27, 2024
Issue firecracker-microvm#87 was previously addressed by having our shim process exit when all
tasks it was managing exited. However, it turns out that is not the expected
behavior for a containerd shim. The expected behavior is that a shim continue
running until all tasks have been *deleted* by the client (via the containerd
Delete API call).

When any task has been deleted (or creation of a task fails), containerd will send
the shim a Shutdown API call, at which time the shim is expected to exit if *all*
tasks it was managing have been deleted.

This commit brings our shim in line with that expected behavior. It,
however, also retains the existing FCControl parameter "ExitAfterAllTasksGone"
(though now renamed to "ExitAfterAllTasksDeleted") to give users the ability
to specify whether the default containerd shim behavior should be ignored and
instead allow the Shim+VM to keep running even after all Tasks have been
deleted.

The majority of the changes here actually end up just being a refactor of the
TaskManager interface to safely handle Create/Delete task APIs alongside
checking whether the shim has any tasks left under management (which it uses to
decide whether it should shut down). The refactoring also ensures that all I/O
is flushed before a Delete call returns (which is a better solution to handling
I/O races than the temporary fix applied in 1e36219).

Signed-off-by: Erik Sipsma <[email protected]>