---
title: Walkthrough
sidebar_label: Walkthrough
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import TenancySupport from '../../../_fragments/tenancy-support.mdx';
import Mark from "@site/src/components/Mark";
import InstallCli from '../../../_partials/deploy/install-cli.mdx';
import KubeconfigUpdate from '@site/docs/_partials/kubeconfig_update.mdx';

This guide walks you through creating volume snapshots for a vCluster with persistent data and restoring that data from the snapshot. You'll deploy a sample application that writes data to a persistent volume, create a snapshot, simulate data loss, and restore from the snapshot using AWS EBS as the storage provider.

:::info Supported CSI Drivers
vCluster officially supports volume snapshots with **AWS EBS CSI Driver** and **OpenEBS**. This walkthrough demonstrates the complete end-to-end process using AWS as an example. Similar steps can be adapted for other supported CSI drivers.
:::

## Prerequisites

Before starting, ensure you have:

- An existing Amazon EKS cluster with the EBS CSI Driver installed. Follow the [EKS deployment guide](../../../deploy/control-plane/container/environment/eks) to set up your cluster.
- The vCluster CLI installed
- Completed the [volume snapshots setup](./#setup) based on your chosen tenancy model
- Configured the default VolumeSnapshotClass for the EBS CSI driver (an example manifest is shown after this list)
- An OCI-compatible registry (such as GitHub Container Registry, Docker Hub, or AWS ECR) or an S3-compatible bucket (AWS S3 or MinIO) for storing snapshots

:::note
You can skip the CSI driver installation steps in the setup guide as the EBS CSI driver is already installed during EKS cluster creation.
:::
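
If no default `VolumeSnapshotClass` exists yet, a minimal manifest for the EBS CSI driver looks like this; the name `ebs-vsc` is illustrative:

```yaml title="VolumeSnapshotClass for the EBS CSI driver (example)"
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-vsc # illustrative name
  annotations:
    # Marks this class as the default for new VolumeSnapshots
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: ebs.csi.aws.com
deletionPolicy: Delete
```

You can confirm the class is registered with `kubectl get volumesnapshotclass`.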

## Deploy vCluster

Choose the deployment option based on your tenancy model:

<Tabs
  groupId="tenancy-model"
  defaultValue="host-nodes"
  values={[
    { label: "vCluster with shared host cluster nodes", value: "host-nodes" },
    { label: "vCluster with private nodes", value: "private-nodes" },
  ]
}>
<TabItem value="host-nodes">

Create a vCluster with default values:

```bash title="Create vCluster"
vcluster create myvcluster
```

</TabItem>
<TabItem value="private-nodes">

For private nodes, create the vCluster first, then provision an EC2 instance and join it as a worker node. Follow the [private nodes documentation](../../../deploy/worker-nodes/private-nodes/join) for the full join procedure; a brief sketch follows the configuration below.

Create the vCluster with volume snapshot support enabled:

```bash title="Create vCluster with private nodes"
vcluster create myvcluster --values vcluster.yaml
```

```yaml title="vcluster.yaml"
pro: true
privateNodes:
  enabled: true
  autoUpgrade:
    imagePullPolicy: Never
controlPlane:
  service:
    spec:
      type: LoadBalancer
networking:
  podCIDR: 10.64.0.0/16
  serviceCIDR: 10.128.0.0/16
deploy:
  volumeSnapshotController:
    enabled: true
```
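
Once the control plane is running, you generate a join token and run the resulting command on the EC2 instance. The sketch below assumes the flow described in the private nodes documentation; the endpoint and token are placeholders:

```bash title="Join an EC2 instance as a node (sketch)"
# On your workstation: create a join token for the vCluster
vcluster token create

# On the EC2 instance: run the join command printed by the previous step
# (endpoint and token below are placeholders)
curl -fsSLk "https://<vcluster-endpoint>/node/join?token=<token>" | sh
```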

</TabItem>
</Tabs>

## Deploy a demo application

Deploy a sample application in the vCluster. This application writes the current date and time at five-second intervals to a file called `out.txt` on a persistent volume.

```bash title="Deploy application with persistent storage"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux
    command: ["/bin/sh"]
    args: ["-c", "while true; do date -u >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
EOF
```

### Verify the application

Wait until the pod is running and the PVC is in `Bound` state:

```bash title="Check pod status"
kubectl get pods
```

Expected output:
```
NAME   READY   STATUS    RESTARTS   AGE
app    1/1     Running   0          37s
```

```bash title="Check PVC status"
kubectl get pvc
```

Expected output:
```
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ebs-claim   Bound    pvc-4062a395-e84e-4efd-91c4-8e09cb12d3a8   4Gi        RWO                           <unset>                 42s
```
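
Alternatively, you can block until the pod reports ready instead of polling:

```bash title="Wait for the pod to become ready (optional)"
kubectl wait --for=condition=Ready pod/app --timeout=120s
```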

Verify that data is being written to the persistent volume:

```bash title="View application data"
kubectl exec app -- cat /data/out.txt | tail -n 3
```

Expected output:
```
Tue Oct 28 13:38:41 UTC 2025
Tue Oct 28 13:38:46 UTC 2025
Tue Oct 28 13:38:51 UTC 2025
```

## Create snapshot with volumes

Create a vCluster snapshot with volume snapshots included by using the `--include-volumes` parameter. The vCluster CLI creates a snapshot request in the host cluster, which is then processed in the background by the vCluster snapshot controller.

Disconnect from the vCluster:

```bash title="Disconnect from vCluster"
vcluster disconnect
```

Create the snapshot:

```bash title="Create snapshot with volumes"
vcluster snapshot create myvcluster "oci://ghcr.io/my-user/my-repo:my-tag" --include-volumes
```

Expected output:
```
18:01:13 info Beginning snapshot creation... Check the snapshot status by running `vcluster snapshot get myvcluster oci://ghcr.io/my-user/my-repo:my-tag`
```

:::note
Replace `oci://ghcr.io/my-user/my-repo:my-tag` with your own OCI registry or another storage location, and ensure the necessary authentication is configured for it; example authentication and an S3 alternative are shown below.
:::
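
For GHCR, logging in with Docker is typically sufficient, assuming the CLI can pick up your local registry credentials; an S3-compatible bucket works as an alternative target. All names and credentials below are placeholders:

```bash title="Registry login and S3 alternative (examples)"
# Log in to GHCR so the snapshot can be pushed (user and token are placeholders)
echo "$GITHUB_TOKEN" | docker login ghcr.io -u my-user --password-stdin

# Alternatively, store the snapshot in an S3 bucket (bucket and key are placeholders)
vcluster snapshot create myvcluster "s3://my-bucket/my-snapshot" --include-volumes
```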

### Check snapshot status

Monitor the snapshot creation progress:

```bash title="Check snapshot status"
vcluster snapshot get myvcluster "oci://ghcr.io/my-user/my-repo:my-tag"
```

Expected output:
```
  SNAPSHOT                               | VOLUMES | SAVED | STATUS    | AGE
 ----------------------------------------+---------+-------+-----------+--------
  oci://ghcr.io/my-user/my-repo:my-tag   | 1/1     | Yes   | Completed | 2m51s
```

Wait until the status shows `Completed` and `SAVED` shows `Yes` before proceeding to the restore step.
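
Under the hood, the snapshot controller creates CSI `VolumeSnapshot` objects on the host cluster. To inspect them, switch your kubectl context to the host cluster and list them in the vCluster's namespace; this assumes the default namespace `vcluster-myvcluster`:

```bash title="Inspect CSI snapshots on the host cluster (optional)"
kubectl get volumesnapshots -n vcluster-myvcluster
```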

## Simulate data loss

To demonstrate the restore functionality, delete the application and its data from the virtual cluster. First, connect to the vCluster:

```bash title="Connect to vCluster"
vcluster connect myvcluster
```

Delete the application and PVC:

```bash title="Delete application and PVC"
cat <<EOF | kubectl delete -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux
    command: ["/bin/sh"]
    args: ["-c", "while true; do date -u >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
EOF
```

## Restore from snapshot

Restore the vCluster from the snapshot, including the volume data. First, disconnect from the vCluster:

```bash title="Disconnect from vCluster"
vcluster disconnect
```

Run the restore command with the `--restore-volumes` parameter. This creates a restore request that the restore controller processes, orchestrating the restoration of the PVCs from their volume snapshots:

```bash title="Restore vCluster with volumes"
vcluster restore myvcluster "oci://ghcr.io/my-user/my-repo:my-tag" --restore-volumes
```

Expected output:
```
17:39:14 info Pausing vCluster myvcluster
17:39:15 info Scale down statefulSet vcluster-myvcluster/myvcluster...
17:39:17 info Starting snapshot pod for vCluster vcluster-myvcluster/myvcluster...
...
2025-10-27 12:09:35 INFO snapshot/restoreclient.go:260 Successfully restored snapshot from oci://ghcr.io/my-user/my-repo:my-tag {"component": "vcluster"}
17:39:37 info Resuming vCluster myvcluster after it was paused
```

### Verify the restore

Once the vCluster is running again, connect to it and verify that the pod and PVC have been restored:

```bash title="Connect to vCluster"
vcluster connect myvcluster
```

Check that the pod is running:

```bash title="Check pod status"
kubectl get pods
```

Expected output:
```
NAME   READY   STATUS    RESTARTS   AGE
app    1/1     Running   0          12m
```

Check that the PVC is bound:

```bash title="Check PVC status"
kubectl get pvc
```

Expected output:
```
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ebs-claim   Bound    pvc-c6ebf439-9fe5-4413-9f86-89916c1e4e49   4Gi        RWO                           <unset>                 12m
```

Verify that the data was successfully restored by checking the log file:

```bash title="Verify restored data"
kubectl exec -it app -- cat /data/out.txt
```

Expected output (showing both old and new timestamps):
```
...
Tue Oct 28 13:39:21 UTC 2025
Tue Oct 28 13:39:26 UTC 2025
Tue Oct 28 13:39:31 UTC 2025
Tue Oct 28 13:46:10 UTC 2025
Tue Oct 28 13:46:15 UTC 2025
Tue Oct 28 13:46:20 UTC 2025
```

Notice the gap in the timestamps. The earlier timestamps (around 13:39) are from before the deletion, while the later timestamps (13:46) are from after the restore. This confirms that the data was successfully recovered from the snapshot and that the application resumed writing new entries.

## Cleanup

To remove the resources created in this tutorial:

Delete the vCluster:

```bash title="Delete vCluster"
vcluster delete myvcluster
```
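
Depending on the `deletionPolicy` of your `VolumeSnapshotClass`, EBS snapshots created during this walkthrough may remain in your AWS account. You can list them and delete any leftovers; the snapshot ID below is a placeholder:

```bash title="Check for leftover EBS snapshots (optional)"
# List EBS snapshots owned by this account
aws ec2 describe-snapshots --owner-ids self \
  --query "Snapshots[].{ID:SnapshotId,Description:Description}" --output table

# Delete a leftover snapshot (ID is a placeholder)
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0
```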

If you created an EKS cluster specifically for this tutorial, you can delete it to avoid ongoing charges:

```bash title="Delete EKS cluster"
eksctl delete cluster -f cluster.yaml --disable-nodegroup-eviction
```