Add snapshot configuration links and examples to cluster pages
We want to steer users towards always using a snapshot repository.
- expand Docker Compose example with Minio
- update all cluster references to link to the snapshots page
- add strong recommendation to always use snapshots for clusters
docs/deploy/server/cluster/deployment.mdx (+5 -1)
@@ -11,7 +11,7 @@ import Admonition from '@theme/Admonition';
This page describes how you can deploy a distributed Restate cluster.

<Admonition type="tip" title="Quickstart using Docker">
- Check out the [Restate cluster guide](/guides/cluster) for a docker-compose ready-made example.
+ Check out the [Restate cluster guide](/guides/cluster) for a ready-made Docker Compose example.
</Admonition>

<Admonition type="tip" title="Migrating an existing single-node deployment">
@@ -24,6 +24,10 @@ This page describes how you can deploy a distributed Restate cluster.
To understand the terminology used on this page, it might be helpful to read through the [architecture reference](/references/architecture).
</Admonition>

+ <Admonition type="caution">
+ Snapshots are essential for safe log trimming, and they let you restrict partition replication to a subset of cluster nodes while still allowing fast partition fail-over to any live node. Snapshots are also necessary to add more nodes in the future.
+ </Admonition>
+
To deploy a distributed Restate cluster without external dependencies, you need to configure the following settings in your [server configuration](/operate/configuration/server):
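Not part of this diff, but for orientation: a minimal per-node configuration of the settings that sentence refers to might look like the sketch below. All node names, addresses, and ports are illustrative assumptions; see the linked server configuration reference for the authoritative key names.

```toml
# Sketch: configuration for one node of a 3-node cluster (illustrative values).
roles = ["metadata-server", "admin", "worker", "log-server"]
cluster-name = "my-cluster"
node-name = "node-1"
advertised-address = "http://node-1:5122"

# Run the replicated metadata server across all three nodes:
[metadata-server]
type = "replicated"

[metadata-client]
addresses = ["http://node-1:5122", "http://node-2:5122", "http://node-3:5122"]

# Use the replicated Bifrost log provider:
[bifrost]
default-provider = "replicated"
```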
+ # Ensure a bucket called "restate" exists on startup:
+ command: "-c 'mkdir -p /data/restate && /usr/bin/minio server --quiet /data'"
+ ports:
+   - "9000:9000"
```
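The diff shows only the changed lines of the Compose file. For context, the full `minio` service presumably looks roughly like the following sketch; the image and the shell entrypoint override are assumptions implied by the `command` above.

```yaml
minio:
  image: quay.io/minio/minio
  # Run via a shell so the bucket directory exists before the server starts;
  # Minio treats top-level directories under /data as buckets.
  entrypoint: /bin/sh
  command: "-c 'mkdir -p /data/restate && /usr/bin/minio server --quiet /data'"
  ports:
    - "9000:9000"
```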
- The cluster uses the `replicated` Bifrost provider and replicates data to 2 nodes.
+ The cluster uses the `replicated` Bifrost provider and replicates log writes to a minimum of 2 nodes.
Since we are running with 3 nodes, the cluster can tolerate 1 node failure without becoming unavailable.
+ By default, partition state is replicated to all workers (though each partition has only one acting leader at a time).

The `replicated` metadata cluster consists of all nodes since they all run the `metadata-server` role.
Since the `replicated` metadata cluster requires a majority quorum to operate, the cluster can tolerate 1 node failure without becoming unavailable.

Take a look at the [cluster deployment documentation](/deploy/server/cluster/deployment) for more information on how to configure and deploy a distributed Restate cluster.
+ In this example we also deployed a Minio server to host the cluster snapshots bucket. Visit [Snapshots](/operate/snapshots) to learn more about why this is strongly recommended for all clusters.
</Step>

<Step stepLabel="2" title="Check the cluster status">
@@ -143,10 +150,19 @@ Take a look at the [cluster deployment documentation](/deploy/server/cluster/dep
```
</Step>

+ <Step stepLabel="7" title="Create snapshots">
+ Try instructing the partition processors to create a snapshot of their state in the object store bucket:
+ Navigate to the Minio console at [http://localhost:9000](http://localhost:9000) and browse the bucket contents (default credentials: `minioadmin`/`minioadmin`).
+ </Step>
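The snapshot command itself is collapsed out of this diff view. Based on current `restatectl`, it is presumably something like the sketch below; the subcommand name and flag are assumptions.

```shell
# Request a snapshot of partition 0; repeat for the other partitions.
restatectl snapshots create-snapshot --partition-id 0
```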
<Step end={true} stepLabel="🎉" title="Congratulations, you managed to run your first distributed Restate cluster and simulated some failures!"/>

Here are some next steps for you to try:

- Try to configure a 5 server Restate cluster that can tolerate up to 2 server failures.
+ - Trim the logs (either manually, or by setting up automatic trimming) _before_ adding more nodes.
- Try to deploy a 3 server Restate cluster using Kubernetes.
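For the manual trimming mentioned in the next-steps list above, a hedged sketch (the exact subcommand and flags may differ across versions):

```shell
# Trim log 0 up to and including log sequence number (LSN) 100.
# Only do this once a snapshot covering that LSN exists.
restatectl logs trim --log-id 0 --trim-point 100
```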
docs/guides/local-to-replicated.mdx (+1 -1)
@@ -28,7 +28,7 @@ Once you restart your Restate server, it will start using the replicated metadat
type = "replicated"
```

- If you plan to extend your single-node deployment to a multi-node deployment, you also need to [configure the snapshot repository](/operate/data-backup#snapshotting).
+ If you plan to extend your single-node deployment to a multi-node deployment, you also need to [configure the snapshot repository](/operate/snapshots).
This allows new nodes to join the cluster by restoring the latest snapshot.
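Taken together with the metadata change above, the resulting `restate.toml` additions might look like this sketch (the bucket name is illustrative; see the snapshots page for the authoritative keys):

```toml
[metadata-server]
type = "replicated"

# Shared snapshot repository so that new nodes can bootstrap
# partition state without replaying the full log:
[worker.snapshots]
destination = "s3://my-snapshot-bucket/my-cluster"
```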
docs/operate/snapshots.mdx (+6 -2)
@@ -11,12 +11,16 @@ import Admonition from '@theme/Admonition';
This page covers configuring a Restate cluster to share partition snapshots for fast fail-over and bootstrapping new nodes. For backup of Restate nodes, see [Data Backup](/operate/data-backup).
</Admonition>

- Restate workers can be configured to periodically publish snapshots of their partition state to a shared destination. Snapshots are not necessarily backups. Rather, snapshots allow nodes that had not previously served a partition to bootstrap a copy of its state. Without snapshots, placing a partition processor on a node that wasn't previously a follower would require the full replay of that partition's log. Replaying the log might take a long time - and is impossible if the log gets trimmed.
To understand the terminology used on this page, it might be helpful to read through the [architecture reference](/references/architecture).
</Admonition>

+ <Admonition type="caution">
+ Snapshots are essential for safe log trimming, and they let you restrict partition replication to a subset of cluster nodes while still allowing fast partition fail-over to any live node. Snapshots are also necessary to add more nodes in the future.
+ </Admonition>
+
+ Restate workers can be configured to periodically publish snapshots of their partition state to a shared destination. Snapshots are not necessarily backups. Rather, snapshots allow nodes that had not previously served a partition to bootstrap a copy of its state. Without snapshots, placing a partition processor on a node that wasn't previously a follower would require the full replay of that partition's log. Replaying the log might take a long time - and is impossible if the log gets trimmed.
+
## Configuring Snapshots

Restate clusters should always be configured with a snapshot repository to allow nodes to efficiently share partition state, and for new nodes to be added to the cluster in the future.
Restate currently supports using Amazon S3 (or an API-compatible object store) as a shared snapshot repository.
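For reference, a minimal repository configuration of the kind this page describes might look like the sketch below; the bucket, prefix, and interval are illustrative, and credentials are expected to come from the usual AWS environment.

```toml
[worker.snapshots]
# Any S3-compatible destination works, e.g. the Minio bucket from the guide:
destination = "s3://restate/snapshots"
# Publish a new snapshot roughly every 10,000 log records processed:
snapshot-interval-num-records = 10000
```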
0 commit comments