Bug description
In other words, we don't experience data loss, but the pod stops gracefully, and when the user starts the workspace again they would not have their data, even though we have it in a PV.
I tried deleting us72, but could not because there were two dangling PVCs:
gitpod /workspace/gitpod (main) $ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ws-ccd64f44-d3b6-49eb-9d1e-9275406745ef Bound pvc-35a16057-d21c-44e2-9a76-a36f44fb1866 30Gi RWO csi-gce-pd-g1-standard 47h
ws-eb6cb985-86f3-435b-9def-d820d2b9060a Bound pvc-50708c34-a721-4f3d-855e-f74c94e2c034 30Gi RWO csi-gce-pd-g1-standard 45h
For the first PVC, given the workspace logs and this workspace trace:
- StartWorkspace is logged for the workspace
- It cannot be scheduled (waiting for scale-up)
- StartWorkspace is logged again after 7 minutes (still not scheduled to a node)
- Which lines up with us seeing the startWorkspace error at 7m in the traces
- We poll for seven minutes to see whether the pending pod should be recreated, and then startWorkspace is called again
- We force-delete the original pod and try starting again using the original context 🤯 (a sketch of this pattern follows the list)
- Introduced in Refactor Manager StartWorkspace #11547
- The CSI provisioner is started for this workspace
- Ring 0 stops; we must've landed on workspace-ws-us72-standard-pvc-pool-2dvw
- The workspace cannot connect to ws-daemon
- The workspace fails to start; the volume snapshot is empty
- "workspace fails to start" is logged again
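To make the force-delete-and-retry step concrete, here is a minimal sketch of that pattern using client-go. The names (`retryUnschedulablePod`, `startWorkspace`), the 10s poll interval, and the overall structure are assumptions for illustration only, not the actual ws-manager code from #11547:

```go
// Hypothetical sketch of the poll / force-delete / retry pattern described
// in the timeline above. Not the real ws-manager implementation.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const podUnschedulableTimeout = 7 * time.Minute

// retryUnschedulablePod polls a pending workspace pod and, if it still has
// not been scheduled after the timeout, force-deletes it and starts the
// workspace again -- reusing the original request context.
func retryUnschedulablePod(ctx context.Context, client kubernetes.Interface, namespace, podName string, startWorkspace func(context.Context) error) error {
	deadline := time.Now().Add(podUnschedulableTimeout)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Spec.NodeName != "" || pod.Status.Phase != corev1.PodPending {
			// Scheduled (or no longer pending): nothing to do.
			return nil
		}
		time.Sleep(10 * time.Second)
	}

	// Still pending after the timeout: force-delete the pod and retry.
	var zero int64
	if err := client.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{GracePeriodSeconds: &zero}); err != nil {
		return err
	}
	return startWorkspace(ctx)
}
```

The surprising bit is the last line: the retry reuses the original request context, so any deadline or cancellation attached to the first attempt carries over into the second start.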
Steps to reproduce
This could be because of:
- Node scale-up that is too slow (likely, given the logs).
- Another possibility is that we restarted ws-manager "during a key moment" and the snapshot did not proceed. We restarted ws-manager a couple of times this week.
- The 1h timeout, which is a byproduct of when we persisted using node storage and backed up using GCS.
So, either:
- Create an ephemeral cluster
- Start many workspaces with loadgen, causing node scale-up; if scale-up takes >7 minutes, you'll hit the code path that was involved for these two workspaces
- Stop the workspaces
- Check whether they backed up or left behind PVCs.
Or: stop a bunch of workspaces, and while they're stopping (before, during, and after the snapshot) stop ws-manager.
Or: let a stopping workspace run into the 1h backup timeout.
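For the ws-manager restart variant, a rough way to script the "restart during a key moment" step is to scale the ws-manager deployment down and back up while workspaces are stopping. The namespace, deployment name, and 30s window below are assumptions, not values taken from this issue:

```go
// Hypothetical repro helper for the "restart ws-manager mid-stop" scenario.
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bounceWSManager scales the ws-manager deployment to zero and back up,
// simulating a restart while workspace pods are in the middle of stopping.
func bounceWSManager(ctx context.Context, client kubernetes.Interface, namespace, deployment string) error {
	scale, err := client.AppsV1().Deployments(namespace).GetScale(ctx, deployment, metav1.GetOptions{})
	if err != nil {
		return err
	}
	original := scale.Spec.Replicas

	scale.Spec.Replicas = 0
	if _, err := client.AppsV1().Deployments(namespace).UpdateScale(ctx, deployment, scale, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Give the stop / snapshot flow a moment to run without its manager.
	time.Sleep(30 * time.Second)

	scale, err = client.AppsV1().Deployments(namespace).GetScale(ctx, deployment, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = original
	_, err = client.AppsV1().Deployments(namespace).UpdateScale(ctx, deployment, scale, metav1.UpdateOptions{})
	return err
}
```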
Workspace affected
gitpodio-templatetypesc-qxnleu3pzu4
Expected behavior
There are a few things:
- We should try to back up for longer than 1h, so that we do not have to manually snapshot PVCs before we delete workspace clusters.
- We should have a metric that tracks how long a PVC has been bound without a pod, and trigger an alert when one or more exists for too long (see the sketch below).
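A possible shape for that metric, sketched with client-go and the Prometheus client: count Bound workspace PVCs that no pod mounts, and let the alert rule's `for:` clause express "exists for too long". The metric name and the `ws-` prefix check are assumptions, not existing ws-manager code:

```go
// Hypothetical sketch of the proposed dangling-PVC metric.
package sketch

import (
	"context"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// danglingPVCs counts bound workspace PVCs that no pod currently mounts.
var danglingPVCs = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "workspace_dangling_pvc_total",
	Help: "Number of bound workspace PVCs with no pod mounting them.",
})

func init() {
	prometheus.MustRegister(danglingPVCs)
}

// updateDanglingPVCGauge lists PVCs and pods in the workspace namespace and
// sets the gauge to the number of bound PVCs that no pod references.
func updateDanglingPVCGauge(ctx context.Context, client kubernetes.Interface, namespace string) error {
	pvcs, err := client.CoreV1().PersistentVolumeClaims(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}

	// Collect every claim name referenced by a pod volume.
	inUse := map[string]bool{}
	for _, pod := range pods.Items {
		for _, vol := range pod.Spec.Volumes {
			if vol.PersistentVolumeClaim != nil {
				inUse[vol.PersistentVolumeClaim.ClaimName] = true
			}
		}
	}

	dangling := 0
	for _, pvc := range pvcs.Items {
		if strings.HasPrefix(pvc.Name, "ws-") && pvc.Status.Phase == corev1.ClaimBound && !inUse[pvc.Name] {
			dangling++
		}
	}
	danglingPVCs.Set(float64(dangling))
	return nil
}
```

Calling something like `updateDanglingPVCGauge` on a timer (or from an informer handler) would have surfaced the two PVCs above well before the cluster deletion attempt.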
Questions:
- The affected workspace was gracefully Stopped (not Failed or Stopping), which means the user could have restarted their workspace without having their files restored. That would have been very confusing, because their uncommitted files would not come back. Is this expected?
Example repository
No response
Anything else?
We currently stop trying to back up after a 1h timeout. This was a design decision for object-storage-based backups and should be revisited as part of the PVC work.
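As a sketch of what revisiting that timeout could look like: make the backup deadline configurable and allow disabling it, instead of hard-coding 1h. `backupWithTimeout` and `doBackup` are placeholders, not the actual ws-manager backup routine:

```go
// Minimal sketch of a configurable backup deadline, replacing the fixed 1h
// carried over from the GCS-based backups.
package sketch

import (
	"context"
	"time"
)

// backupWithTimeout bounds the backup attempt by a configurable deadline.
// A zero timeout means "keep trying until the parent context is cancelled",
// which is closer to what PVC-based backups need.
func backupWithTimeout(ctx context.Context, timeout time.Duration, doBackup func(context.Context) error) error {
	if timeout > 0 {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, timeout)
		defer cancel()
	}
	return doBackup(ctx)
}
```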