.. _performance-issues-psa:

================================================
Mitigate Performance Issues with PSA Replica Set
================================================

.. default-domain:: mongodb

.. contents:: On this page
   :local:
   :backlinks: none
   :depth: 1
   :class: singlecol

Overview
--------

In a three-member replica set with a primary-secondary-arbiter (PSA)
architecture or a sharded cluster with three-member PSA shards, a
data-bearing node that is down or lagged can lead to performance issues.

If one data-bearing node goes down, the other data-bearing node becomes
the primary. Writes with :writeconcern:`w:1 <\<number\>>` continue to
succeed in this state, but writes with write concern
:writeconcern:`"majority"` cannot succeed and the commit point starts to
lag. If your PSA replica set contains a lagged secondary and requires
two nodes to majority commit a change, your commit point also lags.
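
For example, while one data-bearing node is down, a write issued with
write concern :writeconcern:`"majority"` and a ``wtimeout`` does not
complete and instead returns a write concern timeout. The following is a
minimal ``mongosh`` sketch, assuming a hypothetical ``example``
collection:

.. code-block:: javascript

   // This majority write cannot be acknowledged while only one
   // data-bearing node is available. The wtimeout keeps the operation
   // from blocking indefinitely and surfaces a write concern error
   // instead.
   db.example.insertOne(
      { status: "test" },
      { writeConcern: { w: "majority", wtimeout: 5000 } }
   )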

With a lagged commit point, two things can affect your cluster
performance:

- The storage engine keeps **all** changes that happen after the commit
  point on disk to retain a :term:`durable` history. The extra I/O from
  these writes tends to increase over time. This can greatly impact
  write performance and increase cache pressure.
- MongoDB allows the :ref:`oplog <replica-set-oplog>` to grow past its
  configured size limit to avoid deleting the :data:`majority commit
  point <replSetGetStatus.optimes.lastCommittedOpTime>`.
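
You can observe a lagged commit point in the :method:`rs.status()`
output by comparing the most recently applied operation with the
majority commit point. The following is a sketch, assuming you are
connected to the replica set with ``mongosh``; a growing gap between the
two timestamps indicates that the commit point is lagging:

.. code-block:: javascript

   // Compare the last applied operation with the majority commit point.
   // If lastCommittedOpTime falls further and further behind
   // appliedOpTime, the commit point is lagging.
   var status = rs.status();
   printjson(status.optimes.appliedOpTime);
   printjson(status.optimes.lastCommittedOpTime);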

To reduce the cache pressure and increased write traffic, set
:rsconf:`votes: 0 <members[n].votes>` and :rsconf:`priority: 0
<members[n].priority>` for the node that is unavailable or lagging. For
write operations issued with write concern :writeconcern:`"majority"`,
only voting members count toward the number of nodes needed to perform
a majority commit. Setting the node's configuration to :rsconf:`votes: 0
<members[n].votes>` reduces the number of nodes required to commit a
write with write concern :writeconcern:`"majority"` from two to one,
which allows these writes to succeed.

If you want to later change :rsconf:`votes <members[n].votes>` back to a
non-zero number, use the :method:`rs.reconfigForPSASet()` method.

.. note::

   In earlier versions of MongoDB,
   :setting:`~replication.enableMajorityReadConcern` and
   :option:`--enableMajorityReadConcern` were configurable, allowing you
   to disable the default read concern :readconcern:`"majority"`, which
   had a similar effect.

Procedure
---------

To reduce the cache pressure and increased write traffic for a
deployment with a three-member primary-secondary-arbiter (PSA)
architecture, set ``{ votes: 0, priority: 0 }`` for the secondary that
is unavailable or lagging:

.. code-block:: javascript

   // Replace <array_index> with the position of the affected secondary
   // in the members array of the replica set configuration.
   cfg = rs.conf();
   cfg["members"][<array_index>]["votes"] = 0;
   cfg["members"][<array_index>]["priority"] = 0;
   rs.reconfig(cfg);

If you want to change the configuration of the secondary later, use the
:method:`rs.reconfigForPSASet()` method.
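
For example, to give the secondary back its vote and a non-zero priority
once it has recovered, you can run something like the following sketch,
where ``<array_index>`` is again the position of the secondary in the
members array:

.. code-block:: javascript

   // rs.reconfigForPSASet() reconfigures in two steps: it first adds the
   // member back with { votes: 1, priority: 0 }, waits for it to catch
   // up, and then applies the target priority.
   cfg = rs.conf();
   cfg["members"][<array_index>]["votes"] = 1;
   cfg["members"][<array_index>]["priority"] = 1;
   rs.reconfigForPSASet(<array_index>, cfg);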