Merged
4 changes: 2 additions & 2 deletions docs/clustering.md
@@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2

### What Happens to Quorum Queue and Stream Replicas?

-When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management)
-and [stream replicas](./streams#replica-management) on the node will be removed,
+When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management)
+and [stream replicas](./streams#member-management) on the node will be removed,
even if that means that queues and streams would temporarily have an even number of replicas (e.g. two).
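As a concrete illustration of the removal path described above, a node is removed with `rabbitmqctl forget_cluster_node`, as seen in the hunk header. A minimal sketch (node names are hypothetical; this requires a running cluster):

```bash
# Remove rabbit@node3 from the cluster, running the command against a
# remaining node (-n). All quorum queue and stream replicas hosted on
# rabbit@node3 are deleted as part of the removal.
rabbitmqctl forget_cluster_node -n rabbit@node1 rabbit@node3
```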

### Node Removal is Explicit (Manual) or Opt-in
136 changes: 69 additions & 67 deletions docs/quorum-queues/index.md

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/streams.md
@@ -459,7 +459,7 @@ be since more work has to be done to replicate data and achieve consensus.
The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial
stream cluster should span.
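A hedged sketch of setting this argument at declaration time, here via `rabbitmqadmin` from the management plugin (queue name and replica count are illustrative; a running broker is assumed):

```bash
# Declare a stream whose initial replica set spans 3 cluster nodes.
rabbitmqadmin declare queue name=events durable=true \
  arguments='{"x-queue-type":"stream","x-initial-cluster-size":3}'
```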

-### Managing Stream Replicas {#replica-management}
+### Managing Stream Replicas {#member-management}

Replicas of a stream are explicitly managed by the operator. When a new node is added
to the cluster, it will host no stream replicas unless the operator explicitly adds it
@@ -537,7 +537,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching
does not affect leader availability.

Replicas must be explicitly added.
-When a new replica is [added](#replica-management), it will synchronise the entire stream state
+When a new replica is [added](#member-management), it will synchronise the entire stream state
from the leader, similarly to newly added quorum queue replicas.
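A sketch of what this explicit replica management looks like with the `rabbitmq-streams` CLI (stream and node names are hypothetical; assumes a running cluster):

```bash
# Add a replica of stream "events" on node rabbit@node4; the new
# replica then syncs the full stream state from the leader.
rabbitmq-streams add_replica --vhost "/" events rabbit@node4
# Remove the replica hosted on a node being decommissioned.
rabbitmq-streams delete_replica --vhost "/" events rabbit@node1
```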

### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements}
4 changes: 2 additions & 2 deletions versioned_docs/version-3.13/clustering.md
@@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2

### What Happens to Quorum Queue and Stream Replicas?

-When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management)
-and [stream replicas](./streams#replica-management) on the node will be removed,
+When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management)
+and [stream replicas](./streams#member-management) on the node will be removed,
even if that means that queues and streams would temporarily have an even number of replicas (e.g. two).

### Node Removal is Explicit (Manual) or Opt-in
14 changes: 7 additions & 7 deletions versioned_docs/version-3.13/quorum-queues/index.md
@@ -57,7 +57,7 @@ Topics covered in this information include:
* [How are they different](#feature-comparison) from classic queues
* Primary [use cases](#use-cases) of quorum queues and when not to use them
* How to [declare a quorum queue](#usage)
-* [Replication](#replication)-related topics: [replica management](#replica-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc
+* [Replication](#replication)-related topics: [replica management](#member-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc
* What guarantees quorum queues offer in terms of [leader failure handling](#leader-election), [data safety](#data-safety) and [availability](#availability)
* Continuous [Membership Reconciliation](#replica-reconciliation)
* The additional [dead lettering](#dead-lettering) features supported by quorum queues
@@ -453,7 +453,7 @@ launched to run on a random subset of RabbitMQ nodes present in the cluster at d
In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial replica
count is greater than the total number of cluster members, the effective value used will
be equal to the total number of cluster nodes. When more nodes join the cluster, the replica count
-will not be automatically increased but it can be [increased by the operator](#replica-management).
+will not be automatically increased but it can be [increased by the operator](#member-management).

### Queue Leader Location {#leader-placement}

@@ -482,7 +482,7 @@ Supported queue leader locator values are
pick the node hosting the minimum number of quorum queue leaders.
If there are overall more than 1000 queues, pick a random node.

-### Managing Replicas {#replica-management}
+### Managing Replicas {#member-management}

Replicas of a quorum queue are explicitly managed by the operator. When a new node is added
to the cluster, it will host no quorum queue replicas unless the operator explicitly adds it
@@ -522,7 +522,7 @@ it replaces.

Once declared, the RabbitMQ quorum queue leaders may be unevenly distributed across the RabbitMQ cluster.
To re-balance use the `rabbitmq-queues rebalance`
-command. It is important to know that this does not change the nodes which the quorum queues span. To modify the membership instead see [managing replicas](#replica-management).
+command. It is important to know that this does not change the nodes which the quorum queues span. To modify the membership instead see [managing replicas](#member-management).

```bash
# rebalances all quorum queues
```

@@ -547,11 +547,11 @@ rabbitmq-queues rebalance quorum --vhost-pattern "production.*"

:::important
The continuous membership reconciliation (CMR) feature exists in addition to, and not as a replacement for,
-[explicit replica management](#replica-management). In certain cases where nodes are permanently removed
+[explicit replica management](#member-management). In certain cases where nodes are permanently removed
from the cluster, explicitly removing quorum queue replicas may still be necessary.
:::

-In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#replica-management),
+In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#member-management),
nodes can be configured to automatically try to grow the quorum queue replica membership
to a configured target replica number (group size) by enabling the continuous membership reconciliation feature.

@@ -745,7 +745,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching
does not affect leader availability.

Except for the initial replica set selection, replicas must be explicitly added to a quorum queue.
-When a new replica is [added](#replica-management), it will synchronise the entire queue state
+When a new replica is [added](#member-management), it will synchronise the entire queue state
from the leader, similarly to classic mirrored queues.
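A sketch of the corresponding explicit membership commands with the `rabbitmq-queues` CLI (queue and node names are hypothetical; assumes a running cluster):

```bash
# Grow queue "orders" onto rabbit@node4; the new member then syncs
# the full queue state from the leader.
rabbitmq-queues add_member --vhost "/" orders rabbit@node4
# Shrink by removing the member hosted on rabbit@node1.
rabbitmq-queues delete_member --vhost "/" orders rabbit@node1
```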

### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements}
4 changes: 2 additions & 2 deletions versioned_docs/version-3.13/streams.md
@@ -461,7 +461,7 @@ be since more work has to be done to replicate data and achieve consensus.
The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial
stream cluster should span.

-### Managing Stream Replicas {#replica-management}
+### Managing Stream Replicas {#member-management}

Replicas of a stream are explicitly managed by the operator. When a new node is added
to the cluster, it will host no stream replicas unless the operator explicitly adds it
@@ -539,7 +539,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching
does not affect leader availability.

Replicas must be explicitly added.
-When a new replica is [added](#replica-management), it will synchronise the entire stream state
+When a new replica is [added](#member-management), it will synchronise the entire stream state
from the leader, similarly to newly added quorum queue replicas.

### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements}
4 changes: 2 additions & 2 deletions versioned_docs/version-4.0/clustering.md
@@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2

### What Happens to Quorum Queue and Stream Replicas?

-When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management)
-and [stream replicas](./streams#replica-management) on the node will be removed,
+When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management)
+and [stream replicas](./streams#member-management) on the node will be removed,
even if that means that queues and streams would temporarily have an even number of replicas (e.g. two).

### Node Removal is Explicit (Manual) or Opt-in
20 changes: 10 additions & 10 deletions versioned_docs/version-4.0/quorum-queues/index.md
@@ -61,7 +61,7 @@ Topics covered in this document include:
* [How are they different](#feature-comparison) from classic queues
* Primary [use cases](#use-cases) of quorum queues and when not to use them
* How to [declare a quorum queue](#usage)
-* [Replication](#replication)-related topics: [replica management](#replica-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc
+* [Replication](#replication)-related topics: [replica management](#member-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc
* What guarantees quorum queues offer in terms of [leader failure handling](#leader-election), [data safety](#data-safety) and [availability](#availability)
* Continuous [Membership Reconciliation](#replica-reconciliation)
* The additional [dead lettering](#dead-lettering) features supported by quorum queues
@@ -676,9 +676,9 @@ launched to run on a random subset of RabbitMQ nodes present in the cluster at d
In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial replica
count is greater than the total number of cluster members, the effective value used will
be equal to the total number of cluster nodes. When more nodes join the cluster, the replica count
-will not be automatically increased but it can be [increased by the operator](#replica-management).
+will not be automatically increased but it can be [increased by the operator](#member-management).

-### Managing Replicas {#replica-management}
+### Managing Replicas {#member-management}

Replicas of a quorum queue are explicitly managed by the operator. When a new node is added
to the cluster, it will host no quorum queue replicas unless the operator explicitly adds it
@@ -753,7 +753,7 @@ Once declared, the RabbitMQ quorum queue leaders may be unevenly
distributed across the RabbitMQ cluster.
To re-balance use the `rabbitmq-queues rebalance` command.
It is important to know that this does not change the nodes which the quorum queues span.
-To modify the membership instead see [managing replicas](#replica-management).
+To modify the membership instead see [managing replicas](#member-management).

```bash
# rebalances all quorum queues
```

@@ -778,18 +778,18 @@ rabbitmq-queues rebalance quorum --vhost-pattern "production.*"

:::important
The continuous membership reconciliation (CMR) feature exists in addition to, and not as a replacement for,
-[explicit replica management](#replica-management). In certain cases where nodes are permanently removed
+[explicit replica management](#member-management). In certain cases where nodes are permanently removed
from the cluster, explicitly removing quorum queue replicas may still be necessary.
:::

-In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#replica-management),
+In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#member-management),
nodes can be configured to automatically try to grow the quorum queue replica membership
to a configured target group size by enabling the continuous membership reconciliation feature.

When activated, every quorum queue leader replica will periodically check its current membership group size
-(the number of replicas online), and compare it with the target value.
+(the number of configured replicas), and compare it with the target value.

-If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the availible nodes that
+If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the available nodes that
do not currently host replicas of said queue, if any, up to the target value.
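A minimal `rabbitmq.conf` fragment enabling this behaviour might look as follows (a sketch; verify the key names against the CMR settings table in this section):

```ini
# Enable continuous membership reconciliation and target 3 replicas
# per quorum queue.
quorum_queue.continuous_membership_reconciliation.enabled = true
quorum_queue.continuous_membership_reconciliation.target_group_size = 3
```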

#### When is Continuous Membership Reconciliation Triggered?
@@ -810,7 +810,7 @@ are expected to come back and only a minority (often just one) node is stopped f

#### CMR Configuration

-##### `rabbitmq.conf`
+##### Via `rabbitmq.conf`

<table class="name-description">
<caption>Continuous Membership Reconciliation (CMR) Settings</caption>
@@ -973,7 +973,7 @@ does not require a full re-synchronization from the currently elected leader. On
will be transferred if a re-joining replica is behind the leader. This "catching up" process
does not affect leader availability.

-When a new replica is [added](#replica-management), it will synchronise the entire queue state
+When a new replica is [added](#member-management), it will synchronise the entire queue state
from the leader.

### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements}
4 changes: 2 additions & 2 deletions versioned_docs/version-4.0/streams.md
@@ -459,7 +459,7 @@ be since more work has to be done to replicate data and achieve consensus.
The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial
stream cluster should span.

-### Managing Stream Replicas {#replica-management}
+### Managing Stream Replicas {#member-management}

Replicas of a stream are explicitly managed by the operator. When a new node is added
to the cluster, it will host no stream replicas unless the operator explicitly adds it
@@ -537,7 +537,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching
does not affect leader availability.

Replicas must be explicitly added.
-When a new replica is [added](#replica-management), it will synchronise the entire stream state
+When a new replica is [added](#member-management), it will synchronise the entire stream state
from the leader, similarly to newly added quorum queue replicas.

### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements}
4 changes: 2 additions & 2 deletions versioned_docs/version-4.1/clustering.md
@@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2

### What Happens to Quorum Queue and Stream Replicas?

-When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management)
-and [stream replicas](./streams#replica-management) on the node will be removed,
+When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management)
+and [stream replicas](./streams#member-management) on the node will be removed,
even if that means that queues and streams would temporarily have an even number of replicas (e.g. two).

### Node Removal is Explicit (Manual) or Opt-in
20 changes: 10 additions & 10 deletions versioned_docs/version-4.1/quorum-queues/index.md
Original file line number Diff line number Diff line change
@@ -61,7 +61,7 @@ Topics covered in this document include:
* [How are they different](#feature-comparison) from classic queues
* Primary [use cases](#use-cases) of quorum queues and when not to use them
* How to [declare a quorum queue](#usage)
-* [Replication](#replication)-related topics: [replica management](#replica-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc
+* [Replication](#replication)-related topics: [replica management](#member-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc
* What guarantees quorum queues offer in terms of [leader failure handling](#leader-election), [data safety](#data-safety) and [availability](#availability)
* Continuous [Membership Reconciliation](#replica-reconciliation)
* The additional [dead lettering](#dead-lettering) features supported by quorum queues
@@ -676,9 +676,9 @@ launched to run on a random subset of RabbitMQ nodes present in the cluster at d
In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial replica
count is greater than the total number of cluster members, the effective value used will
be equal to the total number of cluster nodes. When more nodes join the cluster, the replica count
-will not be automatically increased but it can be [increased by the operator](#replica-management).
+will not be automatically increased but it can be [increased by the operator](#member-management).

-### Managing Replicas {#replica-management}
+### Managing Replicas {#member-management}

Replicas of a quorum queue are explicitly managed by the operator. When a new node is added
to the cluster, it will host no quorum queue replicas unless the operator explicitly adds it
@@ -753,7 +753,7 @@ Once declared, the RabbitMQ quorum queue leaders may be unevenly
distributed across the RabbitMQ cluster.
To re-balance use the `rabbitmq-queues rebalance` command.
It is important to know that this does not change the nodes which the quorum queues span.
-To modify the membership instead see [managing replicas](#replica-management).
+To modify the membership instead see [managing replicas](#member-management).

```bash
# rebalances all quorum queues
```

@@ -778,18 +778,18 @@ rabbitmq-queues rebalance quorum --vhost-pattern "production.*"

:::important
The continuous membership reconciliation (CMR) feature exists in addition to, and not as a replacement for,
-[explicit replica management](#replica-management). In certain cases where nodes are permanently removed
+[explicit replica management](#member-management). In certain cases where nodes are permanently removed
from the cluster, explicitly removing quorum queue replicas may still be necessary.
:::

-In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#replica-management),
+In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#member-management),
nodes can be configured to automatically try to grow the quorum queue replica membership
to a configured target group size by enabling the continuous membership reconciliation feature.

When activated, every quorum queue leader replica will periodically check its current membership group size
-(the number of replicas online), and compare it with the target value.
+(the number of configured replicas), and compare it with the target value.

-If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the availible nodes that
+If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the available nodes that
do not currently host replicas of said queue, if any, up to the target value.

#### When is Continuous Membership Reconciliation Triggered?
@@ -810,7 +810,7 @@ are expected to come back and only a minority (often just one) node is stopped f

#### CMR Configuration

-##### `rabbitmq.conf`
+##### Via `rabbitmq.conf`

<table class="name-description">
<caption>Continuous Membership Reconciliation (CMR) Settings</caption>
@@ -973,7 +973,7 @@ does not require a full re-synchronization from the currently elected leader. On
will be transferred if a re-joining replica is behind the leader. This "catching up" process
does not affect leader availability.

-When a new replica is [added](#replica-management), it will synchronise the entire queue state
+When a new replica is [added](#member-management), it will synchronise the entire queue state
from the leader.

### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements}
4 changes: 2 additions & 2 deletions versioned_docs/version-4.1/streams.md
@@ -459,7 +459,7 @@ be since more work has to be done to replicate data and achieve consensus.
The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial
stream cluster should span.

-### Managing Stream Replicas {#replica-management}
+### Managing Stream Replicas {#member-management}

Replicas of a stream are explicitly managed by the operator. When a new node is added
to the cluster, it will host no stream replicas unless the operator explicitly adds it
Expand Down Expand Up @@ -537,7 +537,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching
does not affect leader availability.

Replicas must be explicitly added.
-When a new replica is [added](#replica-management), it will synchronise the entire stream state
+When a new replica is [added](#member-management), it will synchronise the entire stream state
from the leader, similarly to newly added quorum queue replicas.

### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements}