Replication Docs Changes #35


Merged 15 commits on Jun 4, 2012
41 changes: 24 additions & 17 deletions source/administration/replica-sets.txt
@@ -13,17 +13,19 @@ addition to general troubleshooting suggestions.

.. seealso::

- :func:`rs.status()` and :func:`db.isMaster()`
- :ref:`Replica Set Reconfiguration Process <replica-set-reconfiguration-usage>`
- :func:`rs.conf()` and :func:`rs.reconfig()`
- :doc:`/reference/replica-configuration`

The following tutorials provide task-oriented instructions for
specific administrative tasks related to replica set operation.

- :doc:`/tutorial/deploy-replica-set`
- :doc:`/tutorial/expand-replica-set`
- :doc:`/tutorial/convert-replica-set-to-replicated-shard-cluster`
- :doc:`/tutorial/deploy-geographically-distributed-replica-set`


Procedures
----------
Expand All @@ -44,7 +46,8 @@ From to time, you may need to add an additional member to an existing

- copy the data directory from an existing member. The new member
becomes a secondary, and will catch up to the current state of the
replica set after a short interval. By copying the data over
manually, replication is given a head-start.
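  For example, after copying the data files into the new member's
  ``dbpath`` and starting its :program:`mongod`, you would add it from
  the :program:`mongo` shell while connected to the primary (the
  hostname here is hypothetical):

  .. code-block:: javascript

     rs.add("mongodb3.example.net:27017")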

If the amount of time between the most recent operation on the new
member and the most recent operation to the database exceeds the
@@ -81,7 +84,7 @@ example:
rs.add({_id: 1, host: "mongo2.example.net:27017", priority: 0, hidden: true})

This configures a :term:`hidden member` that is accessible at
``mongo2.example.net:27017``. See ":data:`host <members[n].host>`,"
":data:`priority <members[n].priority>`," and ":data:`hidden
<members[n].hidden>`" for more information about these settings. When
you specify a full configuration object with :func:`rs.add()`, you must
@@ -95,8 +98,8 @@ this case.
Removing Members
~~~~~~~~~~~~~~~~

A member of a replica set may be removed at any time, for any number
of operational reasons. Use the :func:`rs.remove()` function
in the :program:`mongo` shell while connected to the current
:term:`primary`. Issue the :func:`db.isMaster()` command when
connected to *any* member of the set to determine the current
@@ -124,9 +127,10 @@ directly.
Replacing a Member
~~~~~~~~~~~~~~~~~~

There are two methods for replacing a member of a replica set.

First, you may remove and then re-add a member using the following
procedure in the :program:`mongo` shell:

.. code-block:: javascript

@@ -308,7 +312,7 @@ primary. Member ``3`` has a priority of ``2`` and will become primary,
if eligible, under most circumstances. Member ``2`` has a priority of
``1``, and will become primary if no node with a higher priority is
eligible to be primary. Since all additional nodes in the set will
also have a priority of ``1`` by default, member ``2`` and all
additional nodes will be equally likely to become primary if higher
priority nodes are not accessible. Finally, member ``1`` has a
priority of ``0.5``, which makes it less likely to become primary than
@@ -566,15 +570,15 @@ Possible causes of replication lag include:
Failover and Recovery
~~~~~~~~~~~~~~~~~~~~~

In most cases, failover occurs without administrator intervention
seconds after the :term:`primary` steps down, becomes inaccessible,
or is otherwise ineligible to act as primary. If your MongoDB
deployment does not fail over according to expectations, consider the
following operational errors:

- No remaining member is able to form a majority. This can happen as a
result of network partitions that render some members
inaccessible. Design your deployment to ensure that a majority of
set members can elect a primary in the same facility as core
application systems.
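As a sanity check, the majority arithmetic behind this guidance can be
sketched in plain JavaScript (illustrative only; these are not
:program:`mongo` shell helpers):

.. code-block:: javascript

   // A primary can only be elected by a strict majority of the voting members.
   function majority(votingMembers) {
     return Math.floor(votingMembers / 2) + 1;
   }

   // Members that may become unreachable while the rest can still elect a primary.
   function failuresTolerated(votingMembers) {
     return votingMembers - majority(votingMembers);
   }

   majority(3)           // 2
   failuresTolerated(3)  // 1
   failuresTolerated(4)  // 1 -- an even-sized set gains no extra fault tolerance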

@@ -595,6 +599,9 @@ were never replicated to the set so that the data set is in a
consistent state. The :program:`mongod` program writes rolled back
data to a :term:`BSON` file.

.. a *what*? TODO: Clarify what is meant by "program writes rolled
.. back data to a BSON"

You can prevent rollbacks by ensuring safe writes using the
appropriate :term:`write concern`.
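For example, you can require that a write replicate to a majority of
the set's members before returning, using the ``getLastError`` command
(the collection name here is hypothetical):

.. code-block:: javascript

   db.records.insert({name: "example"})
   db.runCommand({getLastError: 1, w: "majority", wtimeout: 5000})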

22 changes: 11 additions & 11 deletions source/administration/replication-architectures.txt
@@ -53,17 +53,17 @@ architectural conditions are true:

- The set has an odd number of voting members.

Deploy a single :ref:`arbiter <replica-set-arbiters>` if you have
an even number of voting replica set members.

- The set has at most 7 voting members at any time.

- Every member with a :data:`priority <members[n].priority>` greater
than ``0`` can function as ``primary`` in a :term:`failover`
  situation. If a member does not have this capability (e.g. due to
  resource constraints), set its ``priority`` value to ``0``.

- The majority of the set's members exist in the main data center.

.. seealso:: ":doc:`/tutorial/expand-replica-set`."

@@ -90,16 +90,16 @@ In many circumstances, these deployments consist of the following:
become primary (i.e. with a :data:`members[n].priority` value of
``0``.)
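A member like this can be added from the :program:`mongo` shell with
an explicit ``priority`` of ``0`` (the ``_id`` value and hostname here
are hypothetical):

.. code-block:: javascript

   rs.add({_id: 2, host: "mongodb2.example.net:27017", priority: 0})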

If the primary node should fail, the replica set will be able to elect a
new primary node as normal. If the connection between the data centers
fails, any members in the second data center cannot become primary
independently, and the nodes in the primary data center will continue
to function.

If the primary data center fails, recovering from the database
instance in the secondary facility requires manual intervention, but
with proper :term:`write concern` there will be no data loss and
downtime will typically be minimal.

For deployments that maintain three members in the primary data center,
adding a node in a second data center will create an even number of
@@ -115,8 +115,8 @@ Hidden and Non-Voting Members

In some cases it may be useful to maintain a member of the set that
has an always up-to-date copy of the entire data set, but that cannot
become primary. Typical use-cases for these members include providing
backups, supporting reporting workloads, or acting as cold standbys.
There are three
settings relevant for these kinds of nodes:

- **Priority**: These members have :data:`members[n].priority`
@@ -140,7 +140,7 @@ settings relevant for these kinds of nodes:

.. note::

All members of a replica set vote in elections *except* for
:ref:`non-voting <replica-set-non-voting-members>`
members. Priority, hidden, or delayed status does not affect a
member's ability to vote in an election.
@@ -155,7 +155,7 @@ primary, and that the :term:`replication lag` is minimal or
non-existent. You may wish to create a dedicated :ref:`hidden node
<replica-set-hidden-members>` for the purpose of creating backups.

If this node has journaling enabled, you can safely use standard
:ref:`block level backup methods <block-level-backup>` to create a
backup of this node. Otherwise, if your underlying system does not
support snapshots, you can connect :program:`mongodump` to create a
4 changes: 4 additions & 0 deletions source/core/replication.txt
@@ -144,6 +144,10 @@ participate in :term:`elections <election>`.
arbiter on a system with another work load such as an application
server or monitoring node.

.. note::

   It is not recommended to run an arbiter process on a server that is
   an active member (primary or secondary) of its replica set.
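To add an arbiter, start a :program:`mongod` with ``--replSet`` on the
dedicated system and add it from the :program:`mongo` shell while
connected to the primary (the hostname here is hypothetical):

.. code-block:: javascript

   rs.addArb("mongodb4.example.net:27017")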

.. index:: replica set members; non-voting
.. _replica-set-non-voting-members:

71 changes: 39 additions & 32 deletions source/tutorial/deploy-replica-set.txt
@@ -17,22 +17,30 @@ Overview

For most deployments, a simple 3 node replica set provides a
sufficient redundancy to survive most network partitions and other
system failures. A replica set of this size also provides sufficient
capacity to host many distributed read operations.

While MongoDB's replica set functionality provides a great deal of flexibility
and specific definable node behaviors or types, it's best to avoid this
additional complexity until your application requires the functionality.
It is strongly recommended that you avoid using the additional
configuration options (delayed nodes, hidden nodes, voting options,
etc.) unless your circumstances warrant the added complexity. Avoid
premature optimization.


Requirements
------------

Three distinct systems, so that each system can run its own instance
of :program:`mongod`. For development systems you may run all three
instances of the :program:`mongod` process on a local system (e.g. a
laptop) or within a virtual instance. For production environments, you
should endeavor to maintain as much separation between the nodes as
possible. For example, when using virtual machines in production, each
node should live on a separate host server, served by redundant power
circuits and redundant network paths.


Procedure
---------
@@ -75,9 +83,10 @@ following options for more information: :option:`--port <mongod --port>`,
option. You will also need to specify the :option:`--bind_ip
<mongod --bind_ip>` option.

Connect to the :program:`mongod` instance on the first host with the
:program:`mongo` shell. If you're running this command remotely,
replace "localhost" with the appropriate hostname. Open a new terminal
session and enter the following: ::

mongo localhost:27017

@@ -104,8 +113,11 @@ replica set.
rs.add("localhost:27019")

Congratulations, after these commands return you will have a fully
functional replica set. Within a few moments the new replica set
should successfully elect a :term:`primary` node.

At any time, you may check the status of your replica set with the
:func:`rs.status()` command.

See the documentation of the following shell functions for more
information: :func:`rs.initiate()`, :func:`rs.conf()`,
Expand All @@ -125,8 +137,8 @@ Production replica sets are very similar to the development or testing
deployment described above, with the following differences:

- Each member of the replica set will reside on its own machine, and
  the MongoDB processes will all bind to port ``27017`` (the standard
  MongoDB port).

- All runtime configuration will be specified in :doc:`configuration
files </reference/configuration-options>` rather than as
@@ -163,26 +175,21 @@ current node. The DNS or host names need to point and resolve to this
IP address. Configure network rules or a virtual private network
(i.e. "VPN") to permit this access.

.. TODO: file-based replset 'seed list'
.. The note that was here appeared to reference an example that has
.. changed. Currently there is very little info on seed lists
.. www.mongodb.org/display/DOCS/File+Based+Configuration
Reviewer comment: the seed list only affects replica set initiation
the first time; it's not ideal to recommend that users put a seed list
in their config files, since the seed list will remain in the file
forever but only has an impact if the option is set before the set is
initiated.


.. code-block:: cfg

replSet = rs0

See the documentation of the configuration options
used above: :setting:`dbpath`, :setting:`port`,
:setting:`replSet`, :setting:`bind_ip`, and
:setting:`fork`. Also consider
any additional :doc:`configuration options </reference/configuration-options>`
that your deployment may require.

Store this file on each system, located at ``/etc/mongodb.conf`` on
the file system.
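A minimal sketch of such a configuration file might resemble the
following (the IP address, ``dbpath``, and set name are examples only;
adjust them for your deployment):

.. code-block:: cfg

   port = 27017
   bind_ip = 10.8.0.10
   dbpath = /srv/mongodb/
   fork = true
   replSet = rs0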

On each system issue the following command to start the
:program:`mongod` process:

@@ -224,8 +231,8 @@ replica set.
rs.add("mongodb2.example.net")

Congratulations, after these commands return you will have a fully
functional replica set. Within a few moments the new replica set
should successfully elect a :term:`primary` node.

.. seealso:: The documentation of the following shell functions for
more information: