---
title: AWS EBS guide
sidebar_label: AWS EBS guide
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import TenancySupport from '../../../_fragments/tenancy-support.mdx';
import Mark from "@site/src/components/Mark";
import InstallCli from '../../../_partials/deploy/install-cli.mdx';
import KubeconfigUpdate from '@site/docs/_partials/kubeconfig_update.mdx';

This guide walks you through creating volume snapshots for a vCluster with persistent data and restoring that data from a snapshot. You deploy a sample application that writes data to a persistent volume, create a snapshot, simulate data loss, and restore from the snapshot using AWS EBS as the storage provider.

:::info Supported CSI Drivers
vCluster officially supports volume snapshots with the **AWS EBS CSI driver** and **OpenEBS**. This walkthrough demonstrates the complete end-to-end process using AWS EBS as an example. You can adapt the same steps to other supported CSI drivers.
:::

## Prerequisites

Before starting, ensure you have:

- An existing Amazon EKS cluster with the EBS CSI driver installed. Follow the [EKS deployment guide](../../../../deploy/control-plane/container/environment/eks) to set up your cluster.
- The vCluster CLI installed.
- The [volume snapshots setup](./#setup) completed for your chosen tenancy model.
- An OCI-compatible registry (such as GitHub Container Registry, Docker Hub, or AWS ECR) or an S3-compatible bucket (AWS S3 or MinIO) for storing snapshots.

:::note
You can skip the CSI driver installation steps in the setup guide because the EBS CSI driver is already installed during EKS cluster creation.
:::
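
You can quickly confirm the EBS CSI driver pods are present before proceeding. A minimal check, assuming the driver was installed with its standard labels (adjust the selector if your installation labels pods differently):

```bash title="Verify EBS CSI driver (optional)"
# Controller and node pods of the EBS CSI driver run in kube-system
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
```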

## Deploy vCluster

Choose the deployment option based on your tenancy model:

<Tabs
  groupId="tenancy-model"
  defaultValue="host-nodes"
  values={[
    { label: "vCluster with shared host cluster nodes", value: "host-nodes" },
    { label: "vCluster with private nodes", value: "private-nodes" },
  ]}>
<TabItem value="host-nodes">

Create a vCluster with default values:

```bash title="Create vCluster"
vcluster create myvcluster
```

</TabItem>
<TabItem value="private-nodes">

For private nodes, create an EC2 instance and join it to the vCluster as a worker node. Follow the [private nodes documentation](../../../deploy/worker-nodes/private-nodes/join) for the join steps.

Create the vCluster with volume snapshot support enabled through the `deploy.volumeSnapshotController` setting:

```yaml title="vcluster.yaml"
pro: true
privateNodes:
  enabled: true
  autoUpgrade:
    imagePullPolicy: Never
controlPlane:
  service:
    spec:
      type: LoadBalancer
networking:
  podCIDR: 10.64.0.0/16
  serviceCIDR: 10.128.0.0/16
deploy:
  volumeSnapshotController:
    enabled: true
```

```bash title="Create vCluster with private nodes"
vcluster create myvcluster --values vcluster.yaml
```
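
Once the vCluster is up and a node has joined, you can optionally confirm that the snapshot controller started inside the virtual cluster. This is a loose check; the pod naming below is an assumption and may vary by version:

```bash title="Verify snapshot controller (optional)"
# Look for the volume snapshot controller pod inside the vCluster
kubectl get pods -A | grep -i snapshot
```
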
</TabItem>
</Tabs>
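
After the create command completes, the CLI connects your kube-context to the new vCluster. You can optionally confirm the vCluster is running:

```bash title="List vClusters (optional)"
vcluster list
```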

## Deploy a demo application

Deploy a sample application in the vCluster. The application appends the current date and time to a file called `out.txt` on a persistent volume every five seconds.

```bash title="Deploy application with persistent storage"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux
    command: ["/bin/sh"]
    args: ["-c", "while true; do date -u >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
EOF
```

### Verify the application

Wait until the pod is running and the PVC is in the `Bound` state:

```bash title="Check pod status"
kubectl get pods
```

Expected output:
```
NAME   READY   STATUS    RESTARTS   AGE
app    1/1     Running   0          37s
```

```bash title="Check PVC status"
kubectl get pvc
```

Expected output:
```
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ebs-claim   Bound    pvc-4062a395-e84e-4efd-91c4-8e09cb12d3a8   4Gi        RWO                           <unset>                 42s
```
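
If you prefer to block until both resources are ready instead of polling, `kubectl wait` works as well. A minimal sketch (the timeout value is arbitrary; jsonpath-based waits require a reasonably recent kubectl):

```bash title="Wait for readiness (optional)"
# Block until the pod reports Ready
kubectl wait --for=condition=Ready pod/app --timeout=120s
# Block until the PVC phase is Bound
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/ebs-claim --timeout=120s
```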

Verify that data is being written to the persistent volume:

```bash title="View application data"
kubectl exec app -- tail -n 3 /data/out.txt
```

Expected output:
```
Tue Oct 28 13:38:41 UTC 2025
Tue Oct 28 13:38:46 UTC 2025
Tue Oct 28 13:38:51 UTC 2025
```

## Create snapshot with volumes

Create a vCluster snapshot that includes volume snapshots by passing the `--include-volumes` flag. The vCluster CLI creates a snapshot request in the host cluster, which the vCluster snapshot controller then processes in the background.

Disconnect from the vCluster:

```bash title="Disconnect from vCluster"
vcluster disconnect
```

Create the snapshot:

```bash title="Create snapshot with volumes"
vcluster snapshot create myvcluster "oci://ghcr.io/my-user/my-repo:my-tag" --include-volumes
```

Expected output:
```
18:01:13 info Beginning snapshot creation... Check the snapshot status by running `vcluster snapshot get myvcluster oci://ghcr.io/my-user/my-repo:my-tag`
```
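
If you are using an S3-compatible bucket instead of an OCI registry, the same command accepts an `s3://` URL. A sketch with placeholder bucket and key names:

```bash title="Create snapshot in S3 (alternative)"
vcluster snapshot create myvcluster "s3://my-bucket/my-snapshot" --include-volumes
```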

:::note
Replace `oci://ghcr.io/my-user/my-repo:my-tag` with your own OCI registry or other storage location. Ensure you have the necessary authentication configured for it.
:::
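
For example, with GitHub Container Registry the CLI can reuse your local Docker credentials. A minimal sketch, assuming a personal access token with package write access stored in the hypothetical `GH_TOKEN` variable:

```bash title="Authenticate to ghcr.io (example)"
# Log in so pushes to ghcr.io are authorized; credentials land in ~/.docker/config.json
echo "$GH_TOKEN" | docker login ghcr.io -u my-user --password-stdin
```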

### Check snapshot status

Monitor the snapshot creation progress:

```bash title="Check snapshot status"
vcluster snapshot get myvcluster "oci://ghcr.io/my-user/my-repo:my-tag"
```

Expected output:
```
  SNAPSHOT                               | VOLUMES | SAVED | STATUS    | AGE
  ---------------------------------------+---------+-------+-----------+-------
  oci://ghcr.io/my-user/my-repo:my-tag   | 1/1     | Yes   | Completed | 2m51s
```

Wait until the status shows `Completed` and `SAVED` shows `Yes` before proceeding to the restore step.
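
Behind the scenes, the volume snapshots are regular `snapshot.storage.k8s.io` objects on the host cluster. If you want to watch the underlying resources, point your kube-context at the host cluster and list them (the namespace depends on where the vCluster is installed):

```bash title="Inspect CSI snapshot objects on the host cluster (optional)"
# VolumeSnapshot objects are namespaced; VolumeSnapshotContent objects are cluster-scoped
kubectl get volumesnapshots --all-namespaces
kubectl get volumesnapshotcontents
```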

## Simulate data loss

To demonstrate the restore capability, delete the application and its data from the virtual cluster. First, connect to the vCluster:

```bash title="Connect to vCluster"
vcluster connect myvcluster
```

Delete the application and PVC:

```bash title="Delete application and PVC"
cat <<EOF | kubectl delete -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux
    command: ["/bin/sh"]
    args: ["-c", "while true; do date -u >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
EOF
```
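
Equivalently, you can delete the two objects by name. Note that with the typical `Delete` reclaim policy, removing the PVC also deletes the backing EBS volume, so the data is genuinely gone:

```bash title="Delete by name (equivalent)"
kubectl delete pod app
kubectl delete pvc ebs-claim
```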

## Restore from snapshot

Restore the vCluster from the snapshot, including the volume data. First, disconnect from the vCluster:

```bash title="Disconnect from vCluster"
vcluster disconnect
```

Run the restore command with the `--restore-volumes` flag. This creates a restore request that the restore controller processes, orchestrating the restoration of the PVC from the volume snapshot:

```bash title="Restore vCluster with volumes"
vcluster restore myvcluster "oci://ghcr.io/my-user/my-repo:my-tag" --restore-volumes
```

Expected output:
```
17:39:14 info Pausing vCluster myvcluster
17:39:15 info Scale down statefulSet vcluster-myvcluster/myvcluster...
17:39:17 info Starting snapshot pod for vCluster vcluster-myvcluster/myvcluster...
...
2025-10-27 12:09:35 INFO snapshot/restoreclient.go:260 Successfully restored snapshot from oci://ghcr.io/my-user/my-repo:my-tag {"component": "vcluster"}
17:39:37 info Resuming vCluster myvcluster after it was paused
```

### Verify the restore

Once the vCluster is running again, connect to it and verify that the pod and PVC have been restored:

```bash title="Connect to vCluster"
vcluster connect myvcluster
```

Check that the pod is running:

```bash title="Check pod status"
kubectl get pods
```

Expected output:
```
NAME   READY   STATUS    RESTARTS   AGE
app    1/1     Running   0          12m
```

Check that the PVC is bound:

```bash title="Check PVC status"
kubectl get pvc
```

Expected output:
```
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ebs-claim   Bound    pvc-c6ebf439-9fe5-4413-9f86-89916c1e4e49   4Gi        RWO                           <unset>                 12m
```

The `VOLUME` name differs from the original PVC's volume because the restore provisions a new EBS volume from the snapshot.

Verify that the data was successfully restored by checking the log file:

```bash title="Verify restored data"
kubectl exec -it app -- cat /data/out.txt
```

Expected output (showing both old and new timestamps):
```
...
Tue Oct 28 13:39:21 UTC 2025
Tue Oct 28 13:39:26 UTC 2025
Tue Oct 28 13:39:31 UTC 2025
Tue Oct 28 13:46:10 UTC 2025
Tue Oct 28 13:46:15 UTC 2025
Tue Oct 28 13:46:20 UTC 2025
```

Notice the gap in timestamps: the earlier entries (around 13:39) were written before the deletion, while the later entries (from 13:46) were written after the restore. This confirms that the data was recovered from the snapshot and the application resumed writing new entries.

## Cleanup

To remove the resources created in this tutorial, first delete the vCluster:

```bash title="Delete vCluster"
vcluster delete myvcluster
```

If you created an EKS cluster specifically for this tutorial, delete it to avoid ongoing charges:

```bash title="Delete EKS cluster"
eksctl delete cluster -f cluster.yaml --disable-nodegroup-eviction
```