Install Gitpod in Harvester-based k3s cluster for preview environments (opt-in) #7272
Conversation
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Easy now stale-bot, we're just taking a break over the winter holidays 😉
Great stuff! I copied the branch to try it but I get the following error: Also I notice that the job didn't fail because of this error.
Discussed in slack that the error is expected and we'll fix it in a follow-up PR. /approve
LGTM label has been added. Git tree hash: aec5376f8a20200a54024c55e830cbeaed6f3403
[APPROVALNOTIFIER] This PR is APPROVED. Approval requirements bypassed by manually added approval. This pull-request has been approved by: meysholdt. Associated issue: #7. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Description
This provides Werft with kubectl access to the k3s cluster running inside the Harvester-managed, VM-based preview environment so that we can install Gitpod.
It is outside the scope of this PR to get the Gitpod installation working. This PR is a step towards getting there, but we'll focus on getting the Gitpod installation working properly in follow-up PRs; this gives us something to iterate on.
It currently achieves this by adding the [email protected] SSH key to the VM. The keys are also stored in core-dev in the harvester-vm-ssh-keys secret, and they are used in the Werft job to copy out the kubeconfig file for the k3s cluster.
SSH and Kube API access to the VM is achieved by port-forwarding. In a follow-up PR we're hoping to have the Harvester ingress take care of the proxying so we don't have to do that in the Werft job.
The cloudinit has been extended to install k3s, CertManager, and create the certs namespace.
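For reference, a rough sketch of what those cloudinit steps amount to; the exact commands, flags, and cert-manager version below are assumptions, not taken from this PR:

```bash
# Sketch only: the real steps live in the inlined cloudinit
curl -sfL https://get.k3s.io | sh -          # install single-node k3s
# install cert-manager (version is illustrative)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.6.1/cert-manager.yaml
kubectl create namespace certs               # namespace used for certificates
```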
Additionally, instead of using a Secret for the cloudinit, it is now inlined. I found that easier to work with, and given the Secret was public anyway (plaintext in this repository) there wasn't much reason to use a Secret.
Related Issue(s)
Part of https://github.com/gitpod-io/harvester/issues/7
How to test
Trigger the job with the with-vm option (see the example below). This will boot the VM and try to install Gitpod. It won't be able to install Gitpod properly yet, but getting Gitpod fully operational is outside the scope of this PR.
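For example, from a workspace on this branch (with-vm is the option this PR introduces; the exact annotation syntax is an assumption):

```bash
# Start a Werft build of this branch with the VM-based preview environment enabled
werft run github -a with-vm=true
```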
I have added a few debug tips below.
Get a shell in the Werft job pod to debug
If you want to debug, it might be useful to get a shell inside the Werft job pod so you can poke around:
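For example (the pod name below is the build pod from this branch; your build number will differ):

```bash
# From a workspace: find the build pod and open a shell in it
kubectl -n werft get pods | grep gitpod-build-mads-harvester-k3s
kubectl -n werft exec -it gitpod-build-mads-harvester-k3s.25 -- /bin/bash
```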
If you want to delete the pod:
```bash
# From a workspace
kubectl -n werft delete pod gitpod-build-mads-harvester-k3s.25
```
Deleting the VM so you get a new one in the next job
If you're modifying the cloudinit, or for whatever reason want to start with a fresh VM, you can delete your VM and all related resources by deleting its namespace in Harvester.
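For example, against the Harvester cluster (the kubeconfig path and namespace name below are illustrative):

```bash
# Deleting the VM's namespace removes the VM and all related resources
kubectl --kubeconfig harvester-kubeconfig.yaml delete namespace preview-mads-harvester-k3s
```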
SSHing to the VM from a workspace
If you want to SSH into the VM, you can grab the SSH keys, start the port-forward manually, and then SSH in.
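A minimal sketch, assuming the private key lives under an id_rsa key in the harvester-vm-ssh-keys secret and that a port-forwardable service sits in front of the VM; the namespace, service name, and ubuntu user below are assumptions:

```bash
# From a workspace: extract the private key from the secret
kubectl get secret harvester-vm-ssh-keys -o jsonpath='{.data.id_rsa}' | base64 -d > vm_id_rsa
chmod 600 vm_id_rsa
# Forward a local port to the VM's SSH port (namespace and service name are illustrative)
kubectl -n preview-mads-harvester-k3s port-forward service/proxy 2222:22 &
# SSH in via the forwarded port
ssh -i vm_id_rsa -p 2222 ubuntu@127.0.0.1
```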
Getting kubectl access to k3s in VM
This assumes you have SSH access to the VM as described above.
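A sketch of how to pull the kubeconfig out of the VM and reach the k3s API over the same SSH access; the paths, ports, and user are assumptions:

```bash
# Copy the kubeconfig out of the VM (k3s writes it to /etc/rancher/k3s/k3s.yaml, root-readable)
ssh -i vm_id_rsa -p 2222 ubuntu@127.0.0.1 'sudo cat /etc/rancher/k3s/k3s.yaml' > k3s-kubeconfig.yaml
# Forward the k3s API server port so the kubeconfig's default server address (https://127.0.0.1:6443) works locally
ssh -i vm_id_rsa -p 2222 -N -L 6443:127.0.0.1:6443 ubuntu@127.0.0.1 &
kubectl --kubeconfig k3s-kubeconfig.yaml get nodes
```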
Release Notes
Documentation