Kicbase/ISO: Update buildroot from 2023.02.9 to 2025.2 #20720
Conversation
/ok-to-build-image |
/ok-to-build-iso |
/ok-to-test |
It works locally on my own machine. Let's see if it can build ISO successfully on CI |
Hi @ComradeProgrammer, we have updated your PR with the reference to newly built ISO. Pull the changes locally if you want to test with them or update your PR further. |
# the go version on the line below is for the ISO | ||
GOLANG_OPTIONS = GO_VERSION=1.21.6 GO_HASH_FILE=$(PWD)/deploy/iso/minikube-iso/go.hash | ||
GOLANG_OPTIONS = GO_VERSION=1.23.4 GO_HASH_FILE=$(PWD)/deploy/iso/minikube-iso/go.hash |
Why 1.23.4? Latest is 1.23.8.
we currently use GO_VERSION 1.24.0 in this Makefile (set above), what is the reason for overriding it here?
i also noticed that our go.mod still uses 1.23.4 though, not sure if we're blocked on bumping it as well and then have all go versions in sync
GOARCH=arm64 \ | ||
GOPROXY="https://proxy.golang.org,direct" \ | ||
GOSUMDB='sum.golang.org'\ | ||
GOOS=linux |
Why the new options are needed?
For newer versions of Go, the build will fail if we don't set those options.
E.g. since Go 1.21, GOPROXY no longer tolerates an empty string when GO111MODULE is turned on.
See golang/go#61928 (comment)
based on Mills' comment in the linked issue, it looks like if we're building go v1.21.0+ from source we also need to create the $GOROOT/go.env that sets GOPROXY and GOSUMDB, and we probably do not need to explicitly set GOARCH and GOOS (those should be automatically inferred), so these additional changes would not be needed - here and in a few other places/files below?
the additional reason to avoid setting these manually in several places is easier maintenance - we'd avoid failing because we have not manually added them to all the other places that might need them, and the relevant default values would be taken from the go release itself
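For reference, Go 1.21+ releases ship a `$GOROOT/go.env` with defaults along these lines, which is what a from-source build would need to recreate (a sketch of the upstream file contents; verify against the release you build):

```
GOPROXY=https://proxy.golang.org,direct
GOSUMDB=sum.golang.org
GOTOOLCHAIN=auto
```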
GOARCH=arm64 \ | ||
GOPROXY="https://proxy.golang.org,direct" \ | ||
GOSUMDB='sum.golang.org'\ | ||
GOOS=linux |
Unify indent?
I think the indents here are already the same as the other go envs?
i think that Nir meant that we should have the same alignment (ie, the same number of blanks before these last three lines as the lines above), but see my previous comment about avoiding adding these altogether
deploy/iso/minikube-iso/arch/x86_64/package/docker-buildx/docker-buildx.mk
@@ -102,7 +102,7 @@ decryption_keys_path = "/etc/crio/keys/" | |||
|
|||
# Path to the conmon binary, used for monitoring the OCI runtime. | |||
# Will be searched for using $PATH if empty. | |||
conmon = "/usr/libexec/crio/conmon" | |||
conmon = "/usr/bin/conmon" |
The path was changed?
Yes. The new buildroot includes conmon by default, and this is the new path
nit: we could also update the TestDockerSystemInfo to reflect the new conmon path
/ok-to-test |
/ok-to-test |
|
||
// run systemctl reset-failed for a service | ||
// some services declare a relatively small restart-limit in their .service configuration | ||
// so we reset the reset-failed counter to override the limit |
Can we change the service configuration instead? It will avoid the fake ResetFailed interface we add here.
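One way to change the service configuration rather than resetting counters would be a systemd drop-in that disables the start-rate limit. A hypothetical override (the path and the choice of directive are assumptions, not minikube's current setup):

```
# /etc/systemd/system/cri-docker.service.d/10-no-start-limit.conf  (hypothetical)
[Unit]
# 0 disables systemd's start-rate limiting entirely
StartLimitIntervalSec=0
```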
Good idea. I guess it is possible, but perhaps we can do this in the next PR. Currently buildroot issues are blocking us from building the ISO and we cannot update crio, containerd or anything else which involves go>=1.22. It is rather urgent.
The .service file from cri-dockerd is this cri-docker.service,
where it declares StartLimitBurst=3 StartLimitInterval=60s. I am not sure, but I guess this is the problem, because journalctl -u cri-docker.service
always shows cri-docker.service: Start request repeated too quickly.
I guess it may also work if we remove these two lines from cri-containerd via go code
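For context, the rate-limit settings in question would look roughly like this in the unit file (an excerpt reconstructed from the directives named in this thread; the Restart line is an assumption, and the full upstream cri-docker.service contains more directives):

```
[Service]
Type=notify
Restart=always          # assumption: restart policy that triggers the rate limit
StartLimitBurst=3       # at most 3 starts...
StartLimitInterval=60s  # ...within 60 seconds
```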
Why do we have a burst of start requests?
If the systemd unit is defined properly, system will start the service when dependent service are ready and we should not see such issue.
I guess we install the services dynamically when creating the machine (since we don't know at build time which container runtime will be used). And we probably start them manually without considering the dependencies between services, and then retry failed services?
And we probably start them manually without considering the dependencies between services, and then retry failed services?
I agree. For this issue specifically, according to the log here, my guess is that somehow, when we try to start cri-dockerd, the docker daemon/socket is not ready.
However, I did try to wait for the docker service/socket with r.Init.Active("docker")
before restarting cri-containerd, but it doesn't work at all. r.Init.Active("docker")
returns true while cri-containerd continues to complain that Cannot connect to the Docker daemon at unix:///var/run/...
So I just came up with this temporary brute-force solution, forcing a restart of all those services, and it works. This is definitely not a good idea; I think we should continue to investigate and see what we can do to actually solve this.
However I did try to wait for docker service/socket with
r.Init.Active("docker")
before restarting cri-containerd, but it doesn't work at all. r.Init.Active("docker")
returns true while cri-containerd continues to complain that Cannot connect to the Docker daemon at unix:///var/run/...
is-active is not documented to return true when the service is ready:
is-active PATTERN...
Check whether any of the specified units are active (i.e. running).
Returns an exit code 0 if at least one is active, or non-zero
otherwise. Unless --quiet is specified, this will also print the
current unit state to standard output.
For this issue specifically, according to the log here, my guess is that somehow, when we try to start cri-dockerd, the docker daemon/socket is not ready.
This happens here?
minikube/pkg/minikube/cruntime/docker.go
Line 174 in 8575bee
// restart cri-docker |
yes, this is exactly the place where it happened.
this behaviour is strange/unexpected i think: we have Type=notify set and NotifyAccess not set, which should mean that the service (ie, its main process) will send the READY=1 signal only when actually "ready", and that should be picked up by is-active - not sure why it would not work in our setup
ref: https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html
/ok-to-test |
/ok-to-build-iso |
/ok-to-build-image |
I tested this on Fedora 42 with the kvm driver and containerd runtime. My test creates 3 single-node clusters, deploys ocm, olm, rook-ceph, submariner, velero, volsync, minio, and ramen, and runs end-to-end disaster recovery tests. Looking at the first failures in KVM_linux_containerd - we have many of these:
We don't pull any image from docker.io to avoid these failures. |
@@ -47,7 +47,7 @@ KVM_GO_VERSION ?= $(GO_VERSION:.0=) | |||
|
|||
|
|||
INSTALL_SIZE ?= $(shell du out/minikube-windows-amd64.exe | cut -f1) | |||
BUILDROOT_BRANCH ?= 2024.11.2 | |||
BUILDROOT_BRANCH ?= 2025.02 |
Why 2024.11.2 (what we used for other parts) is not right here?
Well, it is still about falco-modules (but it is not working, as you can see).
It seems that after we update buildroot, buildroot appends the FALCO_MODULE_INSTALL_STAGING_OPTS and the FALCO_MODULE_INSTALL_TARGET_OPTS to the cmake command, which causes an ISO build failure. I don't know why this happens, so I tried some other versions, but it still cannot work. All those weird changes about falco-modules (for which I didn't give any reason) are basically all for this.
I guess I will remove falco-modules for now, which should work, I guess.
BTW I think we should still keep using 2025.02, because I found that in buildroot, 20xx.xx are version numbers for long-term support releases, while 20xx.xx.xx are not. If we want to update buildroot, I think the LTS version is better.
LTS sounds good but we don't have to do this change now.
uhh probably you misunderstood this change? The current buildroot version in the main branch is BUILDROOT_BRANCH ?= 2023.02.9.
In this PR I chose 2024.11.2 at the beginning, but then I submitted another commit which changed it to 2025.02.
Since we have to update buildroot in this PR, why shouldn't we update it to the latest LTS version?
The pr message says "Update buildroot from 2023.02.9 to 2024.11.2"
I guess I was also confused by the commit message: "fix falco-modules" - I assumed you updated the falco version. I see now that this updates BUILDROOT_BRANCH.
/ok-to-build-iso /ok-to-test |
thanks for the efforts you've put into this @ComradeProgrammer!
please have a look at the few comments i left
@@ -35,3 +35,4 @@ sha256 36930162a93df417d90bd22c6e14daff4705baac2b02418edda671cdfa9cd07f go1.23 | |||
sha256 8d6a77332487557c6afa2421131b50f83db4ae3c579c3bc72e670ee1f6968599 go1.23.3.src.tar.gz | |||
sha256 ad345ac421e90814293a9699cca19dd5238251c3f687980bbcae28495b263531 go1.23.4.src.tar.gz | |||
sha256 d14120614acb29d12bcab72bd689f257eb4be9e0b6f88a8fb7e41ac65f8556e5 go1.24.0.src.tar.gz | |||
sha256 6924efde5de86fe277676e929dc9917d466efa02fb934197bc2eba35d5680971 go1.23.4.linux-amd64.tar.gz |
i think that the go.hash file should not be updated manually - should be managed by the updateGoHashFile func in hack/update/golang_version/update_golang_version.go
/ok-to-test |
kvm2 driver with docker runtime
Times for minikube start: 52.5s 53.9s 52.9s 56.8s 56.7s
Times for minikube ingress: 19.2s 20.1s 15.6s 20.1s 19.1s

docker driver with docker runtime
Times for minikube start: 26.7s 25.4s 26.8s 24.8s 25.3s
Times for minikube (PR 20720) ingress: 12.4s 12.9s 13.4s 13.9s 12.4s

docker driver with containerd runtime
Times for minikube start: 25.8s 23.1s 24.8s 23.4s 23.5s
Times for minikube ingress: 22.9s 22.9s 28.9s 38.9s 38.9s |
/lgtm |
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: ComradeProgrammer, medyagh The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
thank you @ComradeProgrammer for fixing this ISO issue, this was a blocker for the minikube release and you did amazing work, thanks |
No description provided.