From b1de9eb9dca5ded498f5a2b7283b1da8fccaddd9 Mon Sep 17 00:00:00 2001 From: Karl Nilsson Date: Tue, 29 Apr 2025 09:35:57 +0100 Subject: [PATCH 1/3] Fix incorrect QQ CMR documentation To clarify it operates on the number of _configured_ members _not_ the number of _online_ members. Also settle the latest QQ doc version to use the word "member" instead of the ambiguous "replica" --- docs/quorum-queues/index.md | 134 +++++++++--------- .../version-4.0/quorum-queues/index.md | 4 +- .../version-4.1/quorum-queues/index.md | 4 +- 3 files changed, 72 insertions(+), 70 deletions(-) diff --git a/docs/quorum-queues/index.md b/docs/quorum-queues/index.md index 92244017d..2da762297 100644 --- a/docs/quorum-queues/index.md +++ b/docs/quorum-queues/index.md @@ -61,9 +61,9 @@ Topics covered in this document include: * [How are they different](#feature-comparison) from classic queues * Primary [use cases](#use-cases) of quorum queues and when not to use them * How to [declare a quorum queue](#usage) - * [Replication](#replication)-related topics: [replica management](#replica-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc + * [Replication](#replication)-related topics: [member management](#member-management), [leader rebalancing](#leader-rebalancing), optimal number of members, etc * What guarantees quorum queues offer in terms of [leader failure handling](#leader-election), [data safety](#data-safety) and [availability](#availability) - * Continuous [Membership Reconciliation](#replica-reconciliation) + * Continuous [Membership Reconciliation](#member-reconciliation) * The additional [dead lettering](#dead-lettering) features supported by quorum queues * [Memory and disk footprint](#resource-use) of quorum queues * [Performance](#performance) characteristics of quorum queues @@ -93,7 +93,7 @@ be defined as an agreement between the majority of nodes (`(N/2)+1` where `N` is system participants). When applied to queue mirroring in RabbitMQ [clusters](./clustering) -this means that the majority of replicas (including the currently elected queue leader) +this means that the majority of members (including the currently elected queue leader) agree on the state of the queue and its contents. @@ -112,7 +112,7 @@ not at all from use of quorum queues. Publishers should [use publisher confirms](./publishers#data-safety) as this is how clients can interact with the quorum queue consensus system. Publisher confirms will [only be issued](./confirms#when-publishes-are-confirmed) once -a published message has been successfully replicated to a quorum of replicas +a published message has been successfully replicated to a quorum of members and is considered "safe" within the context of the queue. 
Consumers should use [manual acknowledgements](./confirms) to ensure messages that aren't @@ -158,7 +158,7 @@ With some queue operations there are minor differences: | Message replication | no | yes | | [Exclusivity](./queues) | yes | no | | Per message persistence | per message | always | -| Membership changes | no | [semi-automatic](#replica-reconciliation) | +| Membership changes | no | [semi-automatic](#member-reconciliation) | | [Message TTL (Time-To-Live)](./ttl) | yes | yes | | [Queue TTL](./ttl#queue-ttl) | yes | partially (lease is not renewed on queue re-declaration) | | [Queue length limits](./maxlength) | yes | yes (except `x-overflow`: `reject-publish-dlx`) | @@ -611,11 +611,11 @@ This is because policy definition or applicable policy can be changed dynamicall queue type cannot. It must be specified at the time of declaration. Declaring a queue with an `x-queue-type` argument set to `quorum` will declare a quorum queue with -up to three replicas (default [replication factor](#replication-factor)), +up to three members (default [Initial Group Size](#replication-factor)), one per each [cluster node](./clustering). -For example, a cluster of three nodes will have three replicas, one on each node. -In a cluster of five nodes, three nodes will have one replica each but two nodes won't host any replicas. +For example, a cluster of three nodes will have three members, one on each node. +In a cluster of five nodes, three nodes will have one member each but two nodes won't host any members. After declaration a quorum queue can be bound to any exchange just as any other RabbitMQ queue. @@ -639,31 +639,31 @@ With some queue operations there are minor differences: * Setting [QoS prefetch](#global-qos) for consumers -## Replication Factor and Membership Management {#replication} +## Initial Group Size and Membership Management {#replication} -When a quorum queue is declared, an initial number of replicas for it must be started in the cluster. -By default the number of replicas to be started is up to three, one per RabbitMQ node in the cluster. +When a quorum queue is declared, an initial number of members for it must be started in the cluster. +By default the number of members to be started is up to three, one per RabbitMQ node in the cluster. -Three nodes is the **practical minimum** of replicas for a quorum queue. In RabbitMQ clusters with a larger -number of nodes, adding more replicas than a [quorum](#what-is-quorum) (majority) will not provide +Three nodes is the **practical minimum** of members for a quorum queue. In RabbitMQ clusters with a larger +number of nodes, adding more members than a [quorum](#what-is-quorum) (majority) will not provide any improvements in terms of [quorum queue availability](#quorum-requirements) but it will consume more cluster resources. -Therefore the **recommended number of replicas** for a quorum queue is the quorum of cluster nodes +Therefore the **recommended number of members** for a quorum queue is the quorum of cluster nodes (but no fewer than three). This assumes a [fully formed](./cluster-formation) cluster of at least three nodes. -### Controlling the Initial Replication Factor {#replication-factor} +### Controlling the Initial Group Size {#replication-factor} -For example, a cluster of three nodes will have three replicas, one on each node. -In a cluster of seven nodes, three nodes will have one replica each but four more nodes won't host any replicas +For example, a cluster of three nodes will have three members, one on each node. 
+In a cluster of seven nodes, three nodes will have one member each but four more nodes won't host any members
 of the newly declared queue.
 
-The replication factor (number of replicas a queue has) can be configured for quorum queues.
+The group size (number of members a queue has) can be configured for quorum queues.
 The minimum factor value that makes practical sense is three.
 It is highly recommended for the factor to be an odd number.
 This way a clear quorum (majority) of nodes can be computed. For example, there is no "majority" of
-nodes in a two node cluster. This is covered with more examples below in the [Fault Tolerance and Minimum Number of Replicas Online](#quorum-requirements)
+nodes in a two node cluster. This is covered with more examples below in the [Fault Tolerance and Minimum Number of Members Online](#quorum-requirements)
 section.
 
 This may not be desirable for larger clusters or for cluster with an even number of
@@ -673,23 +673,23 @@ group size argument provided should be an integer that is greater than zero and
 equal to the current RabbitMQ cluster size. The quorum queue will be launched to run
 on a random subset of RabbitMQ nodes present in the cluster at declaration time.
 
-In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial replica
+In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial member
 count is greater than the total number of cluster members, the effective value used will
-be equal to the total number of cluster nodes. When more nodes join the cluster, the replica count
-will not be automatically increased but it can be [increased by the operator](#replica-management).
+be equal to the total number of cluster nodes. When more nodes join the cluster, the member count
+will not be automatically increased but it can be [increased by the operator](#member-management).
 
-### Managing Replicas {#replica-management}
+### Managing Members {#member-management}
 
-Replicas of a quorum queue are explicitly managed by the operator. When a new node is added
-to the cluster, it will host no quorum queue replicas unless the operator explicitly adds it
-to a member (replica) list of a quorum queue or a set of quorum queues.
+Members of a quorum queue are explicitly managed by the operator. When a new node is added
+to the cluster, it will host no quorum queue members unless the operator explicitly adds it
+to a member list of a quorum queue or a set of quorum queues.
 
 When a node has to be decommissioned (permanently removed from the cluster), the [`forget_cluster_node`](./cli)
 command will automatically attempt to remove all quorum queue members on the
 decommissioned node. Alternatively the `shrink` command below can be used ahead of
-node removal to move any replicas to a new node.
+node removal to move any members to a new node.
 
-Also see [Continuous Membership Reconciliation](#replica-reconciliation) for a
+Also see [Continuous Membership Reconciliation](#member-reconciliation) for a
 more automated way to grow quorum queues.
 
 Several [CLI commands](./cli) are provided to perform the above operations:
 
@@ -710,8 +710,9 @@ rabbitmq-queues grow <node> <all | even> [--vhost-pattern <pattern>] [--queue-pa
 rabbitmq-queues shrink <node> [--errors-only]
 ```
+To successfully add and remove members, a quorum must already be available,
+because membership changes are treated as queue state changes and require
+consensus.
 
 Care needs to be taken not to accidentally make a queue unavailable by losing the quorum
 whilst performing maintenance operations that involve membership changes.
 
@@ -721,20 +722,20 @@ that need a member on the new node and then decommission the node it replaces.
 
 ### Queue Leader Location {#leader-placement}
 
-Every quorum queue has a primary replica. That replica is called
+Every quorum queue has a primary member. That member is referred to as
 the _queue leader_. All queue operations go through the leader
 first and then are replicated to followers. This is necessary to
 guarantee FIFO ordering of messages.
 
-To avoid some nodes in a cluster hosting the majority of queue leader
-replicas and thus handling most of the load, queue leaders should
+To avoid some nodes in a cluster hosting the majority of queue leaders
+and thus handling most of the load, queue leaders should
 be reasonably evenly distributed across cluster nodes.
 
 When a new quorum queue is declared, the set of nodes that will host its
-replicas is randomly picked, but will always include the node the client that
+members is randomly picked, but will always include the node the client that
 declares the queue is connected to.
 
-Which replica becomes the initial leader can controlled using three options:
+Which member is selected as the initial leader can be controlled using three options:
 
 1. Setting the `queue-leader-locator` [policy](./policies) key (recommended)
 2. By defining the `queue_leader_locator` key in [the configuration file](./configure#configuration-files) (recommended)
@@ -743,17 +744,17 @@ Which replica becomes the initial leader can controlled using three options:
 
 Supported queue leader locator values are
 
  * `client-local`: Pick the node the client that declares the queue is connected to. This is the default value.
- * `balanced`: If there are overall less than 1000 queues (classic queues, quorum queues, and streams),
+ * `balanced`: If there are fewer than 1000 queues overall (classic queues, quorum queues, and streams),
   pick the node hosting the minimum number of quorum queue leaders.
   If there are overall more than 1000 queues, pick a random node.
 
-### Rebalancing Replicas {#replica-rebalancing}
+### Rebalancing Leaders {#leader-rebalancing}
 
 Once declared, the RabbitMQ quorum queue leaders may be unevenly
 distributed across the RabbitMQ cluster.
 To re-balance use the `rabbitmq-queues rebalance`
 command. It is important to know that this does not change the nodes which the quorum queues span.
-To modify the membership instead see [managing replicas](#replica-management).
+To modify the membership instead see [managing members](#member-management).
 
 ```bash
 # rebalances all quorum queues
@@ -774,23 +775,24 @@ or quorum queues in a particular set of virtual hosts:
 rabbitmq-queues rebalance quorum --vhost-pattern "production.*"
 ```
 
-### Continuous Membership Reconciliation (CMR) {#replica-reconciliation}
+### Continuous Membership Reconciliation (CMR) {#member-reconciliation}
 
 :::important
 
 The continuous membership reconciliation (CMR) feature exists in addition to, and not as a replacement for,
-[explicit replica management](#replica-management). In certain cases where nodes are permanently removed
-from the cluster, explicitly removing quorum queue replicas may still be necessary.
+[explicit member management](#member-management). 
In certain cases where nodes are permanently removed
+from the cluster, explicitly removing quorum queue members may still be necessary.
 :::
 
-In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#replica-management),
-nodes can be configured to automatically try to grow the quorum queue replica membership
-to a configured target group size by enabling the continuous membership reconciliation feature.
+In addition to controlling quorum queue membership by using the initial target size
+and [explicit member management](#member-management), nodes can be configured to
+automatically try to grow the quorum queue membership to a configured
+target group size by enabling the continuous membership reconciliation feature.
 
-When activated, every quorum queue leader replica will periodically check its current membership group size
-(the number of replicas online), and compare it with the target value.
+When activated, every quorum queue leader will periodically check its current membership group size
+(the number of members that are currently configured), and compare it with the target value.
 
-If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the availible nodes that
-do not currently host replicas of said queue, if any, up to the target value.
+If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the available nodes that
+do not currently host members of said queue, if any, up to the target value.
 
 #### When is Continuous Membership Reconciliation Triggered?
 
@@ -799,7 +801,7 @@ certain events in the cluster, such as an addition of a new node, or permanent n
 or a quorum queue-related policy change.
 
 :::warning
-Note that a node or quorum queue replica failure does not trigger automatic membership reconciliation.
+Note that a node or quorum queue member failure does not trigger automatic membership reconciliation.
 
 If a node is failed in an unrecoverable way and cannot be brought back, it must be explicitly removed
 from the cluster or the operator must opt-in and enable the
 `quorum_queue.continuous_membership_reconciliation.auto_remove` setting.
@@ -839,7 +841,7 @@ are expected to come back and only a minority (often just one) node is stopped f
 
       `quorum_queue.continuous_membership_reconciliation.target_group_size`
 
-      The target replica count (group size) for queue members.
+      The target member count (group size) for quorum queues.

    @@ -914,7 +916,7 @@ are expected to come back and only a minority (often just one) node is stopped f `target-group-size` - Defines the target replica count (group size) for matching queues. This policy can be set by users and operators. + Defines the target member count (group size) for matching queues. This policy can be set by users and operators.

    • Data type: positive integer
    • @@ -937,7 +939,7 @@ are expected to come back and only a minority (often just one) node is stopped f `x-quorum-target-group-size` - Defines the target replica count (group size) for matching queues. This key can be overridden by operator policies. + Defines the target member count (group size) for matching queues. This key can be overridden by operator policies.

      • Data type: positive integer
      • @@ -953,8 +955,8 @@ are expected to come back and only a minority (often just one) node is stopped f A quorum queue relies on a consensus protocol called Raft to ensure data consistency and safety. -Every quorum queue has a primary replica (a *leader* in Raft parlance) and zero or more -secondary replicas (called *followers*). +Every quorum queue has a primary member (a *leader* in Raft parlance) and zero or more +secondary members (called *followers*). A leader is elected when the cluster is first formed and later if the leader becomes unavailable. @@ -964,19 +966,19 @@ becomes unavailable. A quorum queue requires a quorum of the declared nodes to be available to function. When a RabbitMQ node hosting a quorum queue's *leader* fails or is stopped another node hosting one of that -quorum queue's *follower* will be elected leader and resume +quorum queue's *followers* will be elected leader and resume operations. Failed and rejoining followers will re-synchronise ("catch up") with the leader. -With quorum queues, a temporary replica failure +With quorum queues, a temporary member failure does not require a full re-synchronization from the currently elected leader. Only the delta -will be transferred if a re-joining replica is behind the leader. This "catching up" process +will be transferred if a re-joining member is behind the leader. This "catching up" process does not affect leader availability. -When a new replica is [added](#replica-management), it will synchronise the entire queue state +When a new member is [added](#member-management), it will synchronise the entire queue state from the leader. -### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} +### Fault Tolerance and Minimum Number of Members Online {#quorum-requirements} Consensus systems can provide certain guarantees with regard to data safety. These guarantees do mean that certain conditions need to be met before they @@ -1038,17 +1040,17 @@ Note that depending on the [partition handling strategy](./partitions) used RabbitMQ may restart itself during recovery and reset the node but as long as that does not happen, this availability guarantee should hold true. -For example, a queue with three replicas can tolerate one node failure without losing availability. -A queue with five replicas can tolerate two, and so on. +For example, a queue with three members can tolerate one node failure without losing availability. +A queue with five members can tolerate two, and so on. If a quorum of nodes cannot be recovered (say if 2 out of 3 RabbitMQ nodes are permanently lost) the queue is permanently unavailable and will need to be force deleted and recreated. -Quorum queue follower replicas that are disconnected from the leader or participating in a leader +Quorum queue follower members that are disconnected from the leader or participating in a leader election will ignore queue operations sent to it until they become aware of a newly elected leader. There will be warnings in the log (`received unhandled msg` and similar) about such events. -As soon as the replica discovers a newly elected leader, it will sync the queue operation +As soon as the member discovers a newly elected leader, it will sync the queue operation log entries it does not have from the leader, including the dropped ones. Quorum queue state will therefore remain consistent. @@ -1082,9 +1084,9 @@ to be delivered in a timely fashion. 
 Due to the disk I/O-heavy nature of quorum queues, their throughput decreases
 as message sizes increase.
 
-Quorum queue throughput is also affected by the number of replicas.
-The more replicas a quorum queue has, the lower its throughput generally will
-be since more work has to be done to replicate data and achieve consensus.
+Quorum queue throughput is also affected by the number of members.
+The more members a quorum queue has, the lower its throughput generally will
+be since more work has to be done to replicate data and achieve consensus.
 
 
 ## Configurable Settings {#configuration}
 
diff --git a/versioned_docs/version-4.0/quorum-queues/index.md b/versioned_docs/version-4.0/quorum-queues/index.md
index 24fc68188..06fb91ab2 100644
--- a/versioned_docs/version-4.0/quorum-queues/index.md
+++ b/versioned_docs/version-4.0/quorum-queues/index.md
@@ -787,9 +787,9 @@ nodes can be configured to automatically try to grow the quorum queue replica me
 to a configured target group size by enabling the continuous membership reconciliation feature.
 
 When activated, every quorum queue leader replica will periodically check its current membership group size
-(the number of replicas online), and compare it with the target value.
+(the number of configured replicas), and compare it with the target value.
 
-If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the availible nodes that
+If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the available nodes that
 do not currently host replicas of said queue, if any, up to the target value.
 
 #### When is Continuous Membership Reconciliation Triggered?
diff --git a/versioned_docs/version-4.1/quorum-queues/index.md b/versioned_docs/version-4.1/quorum-queues/index.md
index 92244017d..351616ff8 100644
--- a/versioned_docs/version-4.1/quorum-queues/index.md
+++ b/versioned_docs/version-4.1/quorum-queues/index.md
@@ -787,9 +787,9 @@ nodes can be configured to automatically try to grow the quorum queue replica me
 to a configured target group size by enabling the continuous membership reconciliation feature.
 
 When activated, every quorum queue leader replica will periodically check its current membership group size
-(the number of replicas online), and compare it with the target value.
+(the number of configured replicas), and compare it with the target value.
 
-If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the availible nodes that
+If a queue is below the target value, RabbitMQ will attempt to grow the queue onto the available nodes that
 do not currently host replicas of said queue, if any, up to the target value.
 
 #### When is Continuous Membership Reconciliation Triggered?
From c221fc076dd238fb830984e9025f9ff53bf28cec Mon Sep 17 00:00:00 2001 From: Michael Klishin Date: Tue, 29 Apr 2025 22:23:31 -0400 Subject: [PATCH 2/3] Update more anchors as in #2253 --- docs/clustering.md | 4 ++-- docs/streams.md | 4 ++-- versioned_docs/version-3.13/clustering.md | 4 ++-- .../version-3.13/quorum-queues/index.md | 14 +++++++------- versioned_docs/version-3.13/streams.md | 4 ++-- versioned_docs/version-4.0/clustering.md | 4 ++-- .../version-4.0/quorum-queues/index.md | 14 +++++++------- versioned_docs/version-4.0/streams.md | 4 ++-- versioned_docs/version-4.1/clustering.md | 4 ++-- .../version-4.1/quorum-queues/index.md | 16 ++++++++-------- versioned_docs/version-4.1/streams.md | 4 ++-- 11 files changed, 38 insertions(+), 38 deletions(-) diff --git a/docs/clustering.md b/docs/clustering.md index 80bda5daf..4a18392b8 100644 --- a/docs/clustering.md +++ b/docs/clustering.md @@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2 ### What Happens to Quorum Queue and Stream Replicas? -When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management) -and [stream replicas](./streams#replica-management) on the node will be removed, +When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management) +and [stream replicas](./streams#member-management) on the node will be removed, even if that means that queues and streams would temporarily have an even (e.g. two) replicas. ### Node Removal is Explicit (Manual) or Opt-in diff --git a/docs/streams.md b/docs/streams.md index 9f2ce3ef9..7d6e80651 100644 --- a/docs/streams.md +++ b/docs/streams.md @@ -459,7 +459,7 @@ be since more work has to be done to replicate data and achieve consensus. The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial stream cluster should span. -### Managing Stream Replicas {#replica-management} +### Managing Stream Replicas {#member-management} Replicas of a stream are explicitly managed by the operator. When a new node is added to the cluster, it will host no stream replicas unless the operator explicitly adds it @@ -537,7 +537,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching does not affect leader availability. Replicas must be explicitly added. -When a new replica is [added](#replica-management), it will synchronise the entire stream state +When a new replica is [added](#member-management), it will synchronise the entire stream state from the leader, similarly to newly added quorum queue replicas. ### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} diff --git a/versioned_docs/version-3.13/clustering.md b/versioned_docs/version-3.13/clustering.md index 80bda5daf..4a18392b8 100644 --- a/versioned_docs/version-3.13/clustering.md +++ b/versioned_docs/version-3.13/clustering.md @@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2 ### What Happens to Quorum Queue and Stream Replicas? -When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management) -and [stream replicas](./streams#replica-management) on the node will be removed, +When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management) +and [stream replicas](./streams#member-management) on the node will be removed, even if that means that queues and streams would temporarily have an even (e.g. two) replicas. 
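+
+As an illustrative sketch (the queue name, vhost, and node name below are
+placeholders), the remaining members of an affected queue can be inspected, and
+queues left with an even number of members can be grown onto another node:
+
+```bash
+# inspect the current members of a quorum queue and their Raft status
+rabbitmq-queues quorum_status "my-quorum-queue" --vhost "/"
+
+# add a member on rabbit@node3 to every quorum queue that currently
+# has an even number of members
+rabbitmq-queues grow rabbit@node3 even
+```
+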
### Node Removal is Explicit (Manual) or Opt-in diff --git a/versioned_docs/version-3.13/quorum-queues/index.md b/versioned_docs/version-3.13/quorum-queues/index.md index a927a1189..9ef17a5b4 100644 --- a/versioned_docs/version-3.13/quorum-queues/index.md +++ b/versioned_docs/version-3.13/quorum-queues/index.md @@ -57,7 +57,7 @@ Topics covered in this information include: * [How are they different](#feature-comparison) from classic queues * Primary [use cases](#use-cases) of quorum queues and when not to use them * How to [declare a quorum queue](#usage) - * [Replication](#replication)-related topics: [replica management](#replica-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc + * [Replication](#replication)-related topics: [replica management](#member-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc * What guarantees quorum queues offer in terms of [leader failure handling](#leader-election), [data safety](#data-safety) and [availability](#availability) * Continuous [Membership Reconciliation](#replica-reconciliation) * The additional [dead lettering](#dead-lettering) features supported by quorum queues @@ -453,7 +453,7 @@ launched to run on a random subset of RabbitMQ nodes present in the cluster at d In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial replica count is greater than the total number of cluster members, the effective value used will be equal to the total number of cluster nodes. When more nodes join the cluster, the replica count -will not be automatically increased but it can be [increased by the operator](#replica-management). +will not be automatically increased but it can be [increased by the operator](#member-management). ### Queue Leader Location {#leader-placement} @@ -482,7 +482,7 @@ Supported queue leader locator values are pick the node hosting the minimum number of quorum queue leaders. If there are overall more than 1000 queues, pick a random node. -### Managing Replicas {#replica-management} +### Managing Replicas {#member-management} Replicas of a quorum queue are explicitly managed by the operator. When a new node is added to the cluster, it will host no quorum queue replicas unless the operator explicitly adds it @@ -522,7 +522,7 @@ it replaces. Once declared, the RabbitMQ quorum queue leaders may be unevenly distributed across the RabbitMQ cluster. To re-balance use the `rabbitmq-queues rebalance` -command. It is important to know that this does not change the nodes which the quorum queues span. To modify the membership instead see [managing replicas](#replica-management). +command. It is important to know that this does not change the nodes which the quorum queues span. To modify the membership instead see [managing replicas](#member-management). ```bash # rebalances all quorum queues @@ -547,11 +547,11 @@ rabbitmq-queues rebalance quorum --vhost-pattern "production.*" :::important The continuous membership reconciliation (CMR) feature exists in addition to, and not as a replacement for, -[explicit replica management](#replica-management). In certain cases where nodes are permanently removed +[explicit replica management](#member-management). In certain cases where nodes are permanently removed from the cluster, explicitly removing quorum queue replicas may still be necessary. 
::: -In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#replica-management), +In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#member-management), nodes can be configured to automatically try to grow the quorum queue replica membership to a configured target replica number (group size) by enabling the continuous membership reconciliation feature. @@ -745,7 +745,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching does not affect leader availability. Except for the initial replica set selection, replicas must be explicitly added to a quorum queue. -When a new replica is [added](#replica-management), it will synchronise the entire queue state +When a new replica is [added](#member-management), it will synchronise the entire queue state from the leader, similarly to classic mirrored queues. ### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} diff --git a/versioned_docs/version-3.13/streams.md b/versioned_docs/version-3.13/streams.md index 8c267ba50..bcbd877a4 100644 --- a/versioned_docs/version-3.13/streams.md +++ b/versioned_docs/version-3.13/streams.md @@ -461,7 +461,7 @@ be since more work has to be done to replicate data and achieve consensus. The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial stream cluster should span. -### Managing Stream Replicas {#replica-management} +### Managing Stream Replicas {#member-management} Replicas of a stream are explicitly managed by the operator. When a new node is added to the cluster, it will host no stream replicas unless the operator explicitly adds it @@ -539,7 +539,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching does not affect leader availability. Replicas must be explicitly added. -When a new replica is [added](#replica-management), it will synchronise the entire stream state +When a new replica is [added](#member-management), it will synchronise the entire stream state from the leader, similarly to newly added quorum queue replicas. ### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} diff --git a/versioned_docs/version-4.0/clustering.md b/versioned_docs/version-4.0/clustering.md index 80bda5daf..4a18392b8 100644 --- a/versioned_docs/version-4.0/clustering.md +++ b/versioned_docs/version-4.0/clustering.md @@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2 ### What Happens to Quorum Queue and Stream Replicas? -When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management) -and [stream replicas](./streams#replica-management) on the node will be removed, +When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management) +and [stream replicas](./streams#member-management) on the node will be removed, even if that means that queues and streams would temporarily have an even (e.g. two) replicas. 
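+
+Before stopping or removing a node, a health check can be used to verify that no
+quorum queue would be left without an online majority; a quick sketch:
+
+```bash
+# exits with a non-zero code if stopping this node would leave
+# some quorum queues without an online quorum
+rabbitmq-queues check_if_node_is_quorum_critical
+```
+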
### Node Removal is Explicit (Manual) or Opt-in diff --git a/versioned_docs/version-4.0/quorum-queues/index.md b/versioned_docs/version-4.0/quorum-queues/index.md index 06fb91ab2..daa103de7 100644 --- a/versioned_docs/version-4.0/quorum-queues/index.md +++ b/versioned_docs/version-4.0/quorum-queues/index.md @@ -61,7 +61,7 @@ Topics covered in this document include: * [How are they different](#feature-comparison) from classic queues * Primary [use cases](#use-cases) of quorum queues and when not to use them * How to [declare a quorum queue](#usage) - * [Replication](#replication)-related topics: [replica management](#replica-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc + * [Replication](#replication)-related topics: [replica management](#member-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc * What guarantees quorum queues offer in terms of [leader failure handling](#leader-election), [data safety](#data-safety) and [availability](#availability) * Continuous [Membership Reconciliation](#replica-reconciliation) * The additional [dead lettering](#dead-lettering) features supported by quorum queues @@ -676,9 +676,9 @@ launched to run on a random subset of RabbitMQ nodes present in the cluster at d In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial replica count is greater than the total number of cluster members, the effective value used will be equal to the total number of cluster nodes. When more nodes join the cluster, the replica count -will not be automatically increased but it can be [increased by the operator](#replica-management). +will not be automatically increased but it can be [increased by the operator](#member-management). -### Managing Replicas {#replica-management} +### Managing Replicas {#member-management} Replicas of a quorum queue are explicitly managed by the operator. When a new node is added to the cluster, it will host no quorum queue replicas unless the operator explicitly adds it @@ -753,7 +753,7 @@ Once declared, the RabbitMQ quorum queue leaders may be unevenly distributed across the RabbitMQ cluster. To re-balance use the `rabbitmq-queues rebalance` command. It is important to know that this does not change the nodes which the quorum queues span. -To modify the membership instead see [managing replicas](#replica-management). +To modify the membership instead see [managing replicas](#member-management). ```bash # rebalances all quorum queues @@ -778,11 +778,11 @@ rabbitmq-queues rebalance quorum --vhost-pattern "production.*" :::important The continuous membership reconciliation (CMR) feature exists in addition to, and not as a replacement for, -[explicit replica management](#replica-management). In certain cases where nodes are permanently removed +[explicit replica management](#member-management). In certain cases where nodes are permanently removed from the cluster, explicitly removing quorum queue replicas may still be necessary. ::: -In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#replica-management), +In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#member-management), nodes can be configured to automatically try to grow the quorum queue replica membership to a configured target group size by enabling the continuous membership reconciliation feature. 
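+
+As a sketch, CMR can be enabled and given a target in `rabbitmq.conf`; the value
+`5` below is only an example, and the individual keys are described in the table below:
+
+```ini
+# enable continuous membership reconciliation
+quorum_queue.continuous_membership_reconciliation.enabled = true
+# periodically try to grow every quorum queue to five members
+quorum_queue.continuous_membership_reconciliation.target_group_size = 5
+```
+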
@@ -973,7 +973,7 @@ does not require a full re-synchronization from the currently elected leader. On will be transferred if a re-joining replica is behind the leader. This "catching up" process does not affect leader availability. -When a new replica is [added](#replica-management), it will synchronise the entire queue state +When a new replica is [added](#member-management), it will synchronise the entire queue state from the leader. ### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} diff --git a/versioned_docs/version-4.0/streams.md b/versioned_docs/version-4.0/streams.md index 929b92aa9..a60fa21b5 100644 --- a/versioned_docs/version-4.0/streams.md +++ b/versioned_docs/version-4.0/streams.md @@ -459,7 +459,7 @@ be since more work has to be done to replicate data and achieve consensus. The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial stream cluster should span. -### Managing Stream Replicas {#replica-management} +### Managing Stream Replicas {#member-management} Replicas of a stream are explicitly managed by the operator. When a new node is added to the cluster, it will host no stream replicas unless the operator explicitly adds it @@ -537,7 +537,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching does not affect leader availability. Replicas must be explicitly added. -When a new replica is [added](#replica-management), it will synchronise the entire stream state +When a new replica is [added](#member-management), it will synchronise the entire stream state from the leader, similarly to newly added quorum queue replicas. ### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} diff --git a/versioned_docs/version-4.1/clustering.md b/versioned_docs/version-4.1/clustering.md index 80bda5daf..4a18392b8 100644 --- a/versioned_docs/version-4.1/clustering.md +++ b/versioned_docs/version-4.1/clustering.md @@ -1163,8 +1163,8 @@ rabbitmqctl forget_cluster_node -n rabbit@rabbit1 rabbit@rabbit2 ### What Happens to Quorum Queue and Stream Replicas? -When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#replica-management) -and [stream replicas](./streams#replica-management) on the node will be removed, +When a node is removed from the cluster using CLI tools, all [quorum queue](./quorum-queues#member-management) +and [stream replicas](./streams#member-management) on the node will be removed, even if that means that queues and streams would temporarily have an even (e.g. two) replicas. 
### Node Removal is Explicit (Manual) or Opt-in diff --git a/versioned_docs/version-4.1/quorum-queues/index.md b/versioned_docs/version-4.1/quorum-queues/index.md index 351616ff8..2bf9dd4eb 100644 --- a/versioned_docs/version-4.1/quorum-queues/index.md +++ b/versioned_docs/version-4.1/quorum-queues/index.md @@ -61,7 +61,7 @@ Topics covered in this document include: * [How are they different](#feature-comparison) from classic queues * Primary [use cases](#use-cases) of quorum queues and when not to use them * How to [declare a quorum queue](#usage) - * [Replication](#replication)-related topics: [replica management](#replica-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc + * [Replication](#replication)-related topics: [replica management](#member-management), [replica leader rebalancing](#replica-rebalancing), optimal number of replicas, etc * What guarantees quorum queues offer in terms of [leader failure handling](#leader-election), [data safety](#data-safety) and [availability](#availability) * Continuous [Membership Reconciliation](#replica-reconciliation) * The additional [dead lettering](#dead-lettering) features supported by quorum queues @@ -676,9 +676,9 @@ launched to run on a random subset of RabbitMQ nodes present in the cluster at d In case a quorum queue is declared before all cluster nodes have joined the cluster, and the initial replica count is greater than the total number of cluster members, the effective value used will be equal to the total number of cluster nodes. When more nodes join the cluster, the replica count -will not be automatically increased but it can be [increased by the operator](#replica-management). +will not be automatically increased but it can be [increased by the operator](#member-management). -### Managing Replicas {#replica-management} +### Managing Replicas {#member-management} Replicas of a quorum queue are explicitly managed by the operator. When a new node is added to the cluster, it will host no quorum queue replicas unless the operator explicitly adds it @@ -753,7 +753,7 @@ Once declared, the RabbitMQ quorum queue leaders may be unevenly distributed across the RabbitMQ cluster. To re-balance use the `rabbitmq-queues rebalance` command. It is important to know that this does not change the nodes which the quorum queues span. -To modify the membership instead see [managing replicas](#replica-management). +To modify the membership instead see [managing replicas](#member-management). ```bash # rebalances all quorum queues @@ -778,11 +778,11 @@ rabbitmq-queues rebalance quorum --vhost-pattern "production.*" :::important The continuous membership reconciliation (CMR) feature exists in addition to, and not as a replacement for, -[explicit replica management](#replica-management). In certain cases where nodes are permanently removed +[explicit replica management](#member-management). In certain cases where nodes are permanently removed from the cluster, explicitly removing quorum queue replicas may still be necessary. ::: -In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#replica-management), +In addition to controlling quorum queue replica membership by using the initial target size and [explicit replica management](#member-management), nodes can be configured to automatically try to grow the quorum queue replica membership to a configured target group size by enabling the continuous membership reconciliation feature. 
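+
+A target group size can also be set for a subset of queues via a policy; as an
+illustrative sketch (the policy name and pattern are placeholders) using the
+`target-group-size` policy key described below:
+
+```bash
+# ask CMR to grow quorum queues whose names start with "qq." to five members
+rabbitmqctl set_policy --apply-to quorum_queues qq-target-size "^qq\." '{"target-group-size": 5}'
+```
+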
@@ -810,7 +810,7 @@ are expected to come back and only a minority (often just one) node is stopped f #### CMR Configuration -##### `rabbitmq.conf` +##### Via `rabbitmq.conf` @@ -973,7 +973,7 @@ does not require a full re-synchronization from the currently elected leader. On will be transferred if a re-joining replica is behind the leader. This "catching up" process does not affect leader availability. -When a new replica is [added](#replica-management), it will synchronise the entire queue state +When a new replica is [added](#member-management), it will synchronise the entire queue state from the leader. ### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} diff --git a/versioned_docs/version-4.1/streams.md b/versioned_docs/version-4.1/streams.md index 9f2ce3ef9..7d6e80651 100644 --- a/versioned_docs/version-4.1/streams.md +++ b/versioned_docs/version-4.1/streams.md @@ -459,7 +459,7 @@ be since more work has to be done to replicate data and achieve consensus. The `x-initial-cluster-size` queue argument controls how many rabbit nodes the initial stream cluster should span. -### Managing Stream Replicas {#replica-management} +### Managing Stream Replicas {#member-management} Replicas of a stream are explicitly managed by the operator. When a new node is added to the cluster, it will host no stream replicas unless the operator explicitly adds it @@ -537,7 +537,7 @@ will be transferred if a re-joining replica is behind the leader. This "catching does not affect leader availability. Replicas must be explicitly added. -When a new replica is [added](#replica-management), it will synchronise the entire stream state +When a new replica is [added](#member-management), it will synchronise the entire stream state from the leader, similarly to newly added quorum queue replicas. ### Fault Tolerance and Minimum Number of Replicas Online {#quorum-requirements} From e54721af42d7cc09e6bec5910ad1b63389719dca Mon Sep 17 00:00:00 2001 From: Michael Klishin Date: Tue, 29 Apr 2025 22:31:28 -0400 Subject: [PATCH 3/3] QQ guide: cosmetics --- docs/quorum-queues/index.md | 2 +- versioned_docs/version-4.0/quorum-queues/index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/quorum-queues/index.md b/docs/quorum-queues/index.md index 2da762297..c3fe15d87 100644 --- a/docs/quorum-queues/index.md +++ b/docs/quorum-queues/index.md @@ -812,7 +812,7 @@ are expected to come back and only a minority (often just one) node is stopped f #### CMR Configuration -##### `rabbitmq.conf` +##### Via `rabbitmq.conf`
        Continuous Membership Reconciliation (CMR) Settings
        diff --git a/versioned_docs/version-4.0/quorum-queues/index.md b/versioned_docs/version-4.0/quorum-queues/index.md index daa103de7..2cebf45e9 100644 --- a/versioned_docs/version-4.0/quorum-queues/index.md +++ b/versioned_docs/version-4.0/quorum-queues/index.md @@ -810,7 +810,7 @@ are expected to come back and only a minority (often just one) node is stopped f #### CMR Configuration -##### `rabbitmq.conf` +##### Via `rabbitmq.conf`
        Continuous Membership Reconciliation (CMR) Settings
        Continuous Membership Reconciliation (CMR) Settings