DOCS-303 shard cluster to sharded cluster #311

Merged 1 commit on Oct 12, 2012
2 changes: 1 addition & 1 deletion draft/core/write-operations.txt
@@ -46,4 +46,4 @@ Architecture
 .. ordered list:

 - atomicity
-- replica sets / shard clusters
+- replica sets / sharded clusters
@@ -81,7 +81,7 @@ in the :ref:`sharding-pre-splitting` section below.
 Uneven distribution occurs in the following cases:

 - You insert a large volume of data that is not evenly distributed. Even
-  if the :term:`sharded cluster <shard cluster>` contains existing
+  if the :term:`sharded cluster` contains existing
   documents balanced over multiple chunks, the inserted data might
   include values that write disproportionately to a small number of
   chunks.
2 changes: 1 addition & 1 deletion draft/use-cases/gaming-user-state.txt
@@ -570,7 +570,7 @@ Sharding
 --------

 If your system needs to scale beyond a single MongoDB instance node,
-you will want to use a :term:`shard cluster`, which takes advantage of
+you will want to use a :term:`sharded cluster`, which takes advantage of
 MongoDB's :term:`sharding` functionality.

 .. see:: ":doc:`/faq/sharding`" and the ":wiki:`Sharding` wiki page.
6 changes: 3 additions & 3 deletions meta.style-guide.rst
@@ -157,7 +157,7 @@ Referencing

 Type this:

-   To deploy a shard cluster for an existing replica set, see
+   To deploy a sharded cluster for an existing replica set, see
    :doc:`/tutorial/convert-replica-set-to-replicated-shard-cluster`.

 General Formulations
@@ -277,7 +277,7 @@ disk.)
 Distributed System Terms
 ~~~~~~~~~~~~~~~~~~~~~~~~

-- Refer to partitioned systems as "shard clusters," over other
+- Refer to partitioned systems as "sharded clusters," over other
   variants. (e.g. sharded clusters, or sharded systems.)

 - Refer configurations that run with replication as "replica sets" (or
@@ -317,7 +317,7 @@ Notes on Specific Features
 Other Terms
 ~~~~~~~~~~~

-- Use "**shard cluster**," to refer to a collection of ``mongod``
+- Use "**sharded cluster**," to refer to a collection of ``mongod``
   instances that hold a sharded data set. Use the term "**replica
   set**," to refer to a collection of ``mongod`` instances that
   provide a replicated data set. Do not use the word "cluster" to
4 changes: 2 additions & 2 deletions source/administration/backups.txt
@@ -18,7 +18,7 @@ process is crucial for every production-grade deployment. Take the
 specific features of your deployment, your use patterns, and
 architecture into consideration as you develop your own backup system.

-:term:`Replica sets <replica set>` and :term:`sharded clusters <shard cluster>`
+:term:`Replica sets <replica set>` and :term:`sharded clusters`
 require special considerations. Don't miss the :ref:`backup
 considerations for sharded clusters and replica sets
 <backups-with-sharding-and-replication>`.
@@ -595,7 +595,7 @@ binary dump of each database instance using :ref:`binary dump methods

 These backups must not only capture the database in a consistent
 state, as described in the aforementioned procedures, but the
-:term:`sharded cluster <shard cluster>` needs to be consistent in itself. Also, disable
+:term:`sharded cluster` needs to be consistent in itself. Also, disable
 the balancer process that equalizes the distribution of data among the
 :term:`shards <shard>` before taking the backup.
4 changes: 2 additions & 2 deletions source/administration/monitoring.txt
@@ -14,7 +14,7 @@ waiting for a crisis or failure.
 This document provides an overview of the available tools and data
 provided by MongoDB as well as introduction to diagnostic strategies,
 and suggestions for monitoring instances in MongoDB's replica sets and
-shard clusters.
+sharded clusters.

 .. note::
@@ -488,7 +488,7 @@ instances become unavailable. However, clusters remain
 accessible from already-running :program:`mongos` instances.

 Because inaccessible configuration servers can have a serious impact
-on the availability of a shard cluster, you should monitor the
+on the availability of a sharded cluster, you should monitor the
 configuration servers to ensure that the cluster remains well
 balanced and that :program:`mongos` instances can restart.
2 changes: 1 addition & 1 deletion source/administration/replica-sets.txt
@@ -171,7 +171,7 @@ other members in the set will not advertise the hidden member in the

 .. versionchanged:: 2.0

-   For :term:`sharded clusters <shard cluster>` running with replica sets before 2.0 if
+   For :term:`sharded clusters` running with replica sets before 2.0 if
    you reconfigured a member as hidden, you *had* to restart
    :program:`mongos` to prevent queries from reaching the hidden
    member.
4 changes: 2 additions & 2 deletions source/administration/sharding-architectures.txt
@@ -9,7 +9,7 @@ Sharded Cluster Architectures
 .. default-domain:: mongodb

 This document describes the organization and design of :term:`sharded
-cluster <shard cluster>` deployments.
+cluster` deployments.

 .. seealso:: The :doc:`/administration/sharding` document, the
    ":ref:`Sharding Requirements <sharding-requirements>`" section,
@@ -90,7 +90,7 @@ that *are not* sharded reside on the primary for their database. Use
 the :dbcommand:`movePrimary` command to change the primary shard for a
 database. Use the :dbcommand:`printShardingStatus` command or the
 :method:`sh.status()` to see an overview of the cluster, which contains
-information about the chunk and database distribution within the
+information about the :term:`chunk` and database distribution within the
 cluster.

 .. warning::
16 changes: 8 additions & 8 deletions source/administration/sharding.txt
@@ -12,7 +12,7 @@ clusters. For a full introduction to sharding in MongoDB see
 :doc:`/core/sharding`, and for a complete overview of all sharding
 documentation in the MongoDB Manual, see :doc:`/sharding`. The
 :doc:`/administration/sharding-architectures` document provides an
-overview of deployment possibilities to help deploy a shard
+overview of deployment possibilities to help deploy a sharded
 cluster. Finally, the :doc:`/core/sharding-internals` document
 provides a more detailed introduction to sharding when troubleshooting
 issues or understanding your cluster's behavior.
@@ -192,7 +192,7 @@ use the following procedure as a quick starting point:
 Cluster Management
 ------------------

-Once you have a running shard cluster, you will need to maintain it.
+Once you have a running sharded cluster, you will need to maintain it.
 This section describes common maintenance procedure, including: how to
 add and remove nodes, how to manually split chunks, and how to disable
 the balancer for backups.
@@ -213,7 +213,7 @@ command:
 Add a Shard to a Cluster
 ~~~~~~~~~~~~~~~~~~~~~~~~

-To add a shard to an *existing* shard cluster, use the following
+To add a shard to an *existing* sharded cluster, use the following
 procedure:

 #. Connect to a :program:`mongos` in the cluster using the
@@ -405,7 +405,7 @@ Chunk Management
 This section describes various operations on :term:`chunks <chunk>` in
 :term:`sharded clusters <sharded cluster>`. MongoDB automates these
 processes; however, in some cases, particularly when you're setting up
-a shard cluster, you may need to create and manipulate chunks
+a sharded cluster, you may need to create and manipulate chunks
 directly.

 .. _sharding-procedure-create-split:
@@ -558,7 +558,7 @@ To create and migrate chunks manually, use the following procedure:
 Modify Chunk Size
 ~~~~~~~~~~~~~~~~~

-When you initialize a shard cluster, the default chunk size is 64
+When you initialize a sharded cluster, the default chunk size is 64
 megabytes. This default chunk size works well for most deployments. However, if you
 notice that automatic migrations are incurring a level of I/O that
 your hardware cannot handle, you may want to reduce the chunk
Expand Down Expand Up @@ -790,7 +790,7 @@ be able to migrate chunks:
two digit hour and minute values (e.g ``HH:MM``) that describe the
beginning and end boundaries of the balancing window.
These times will be evaluated relative to the time zone of each individual
:program:`mongos` instance in the shard cluster.
:program:`mongos` instance in the sharded cluster.
For instance, running the following
will force the balancer to run between 11PM and 6AM local time only:

@@ -1109,7 +1109,7 @@ default chunk size is configurable with the :setting:`chunkSize`
 setting, these behaviors help prevent unnecessary chunk migrations,
 which can degrade the performance of your cluster as a whole.

-If you have just deployed a shard cluster, make sure that you have
+If you have just deployed a sharded cluster, make sure that you have
 enough data to make sharding effective. If you do not have sufficient
 data to create more than eight 64 megabyte chunks, then all data will
 remain on one shard. Either lower the :ref:`chunk size
@@ -1144,7 +1144,7 @@ to correct this pattern.
 The Cluster does not Balance
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-If you have just deployed your shard cluster, you may want to
+If you have just deployed your sharded cluster, you may want to
 consider the :ref:`troubleshooting suggestions for a new cluster where
 data remains on a single shard <sharding-troubleshooting-not-splitting>`.
10 changes: 5 additions & 5 deletions source/applications/replication.txt
@@ -7,10 +7,10 @@ Application Development with Replica Sets
 From the perspective of a client application, whether a MongoDB
 instance is running as a single server (i.e. "standalone") or a :term:`replica set`
 is transparent. However, replica sets
-offer some configuration options for write and read operations. [#shard-clusters]_
+offer some configuration options for write and read operations. [#sharded-clusters]_
 This document describes those options and their implications.

-.. [#shard-clusters] :term:`Shard clusters <shard cluster>` where the
+.. [#sharded-clusters] :term:`Sharded clusters <sharded cluster>` where the
    shards are also replica sets provide the same configuration options
    with regards to write and read operations.
@@ -529,7 +529,7 @@ Member Selection
 ````````````````

 Both clients, by way of their drivers, and :program:`mongos` instances for
-shard clusters send periodic "ping," messages to all member of the
+sharded clusters send periodic "ping," messages to all member of the
 replica set to determine latency from the application to each
 :program:`mongod` instance.
@@ -573,10 +573,10 @@ Sharding and ``mongos``

 In most :term:`sharded clusters <sharded cluster>`, a :term:`replica set`
 provides each shard where read preferences are also applicable. Read
-operations in a shard cluster, with regard to read preference, are
+operations in a sharded cluster, with regard to read preference, are
 identical to unsharded replica sets.

-Unlike simple replica sets, in shard clusters, all interactions with
+Unlike simple replica sets, in sharded clusters, all interactions with
 the shards pass from the clients to the :program:`mongos` instances
 that are actually connected to the set members. :program:`mongos` is
 responsible for the application of the read preferences, which is
6 changes: 3 additions & 3 deletions source/core/sharding-internals.txt
@@ -21,7 +21,7 @@ Shard Keys
 ----------

 Shard keys are the field in a collection that MongoDB uses to
-distribute :term:`documents <document>` within a shard cluster. See the
+distribute :term:`documents <document>` within a sharded cluster. See the
 :ref:`overview of shard keys <sharding-shard-key>` for an
 introduction to these topics.
@@ -169,7 +169,7 @@ compound shard key. The data may become more splitable with a
 compound shard key.

 .. see:: ":ref:`sharding-mongos`" for more information on query
-   operations in the context of shard clusters.
+   operations in the context of sharded clusters.

 .. [#shard-key-index] In many ways, you can think of the shard key a
    cluster-wide unique index. However, be aware that sharded systems
@@ -402,7 +402,7 @@ than two.*

 The specification of the balancing window is relative to the local
 time zone of all individual :program:`mongos` instances in the
-shard cluster.
+sharded cluster.

 .. index:: sharding; chunk size
 .. _sharding-chunk-size:
20 changes: 10 additions & 10 deletions source/core/sharding.txt
@@ -114,7 +114,7 @@ You should consider deploying a :term:`sharded cluster`, if:
 If these attributes are not present in your system, sharding will only
 add additional complexity to your system without providing much
 benefit. When designing your data model, if you will eventually need a
-shard cluster, consider which collections you will want to shard and
+sharded cluster, consider which collections you will want to shard and
 the corresponding shard keys.

 .. _sharding-capacity-planning:
@@ -169,7 +169,7 @@ A :term:`sharded cluster` has the following components:
    MongoDB enables data :term:`partitioning <partition>`, or
    sharding, on a *per collection* basis. You *must* access all data
    in a sharded cluster via the :program:`mongos` instances as below.
-   If you connect directly to a :program:`mongod` in a shard cluster
+   If you connect directly to a :program:`mongod` in a sharded cluster
    you will see its fraction of cluster's data. The data on any
    given shard may be somewhat random: MongoDB provides no grantee
    that any two contiguous chunks will reside on a single shard.
@@ -304,7 +304,7 @@ within a :term:`sharded cluster`. Without a config database, the
 operations within the cluster.

 Config servers *do not* run as replica sets. Instead, a :term:`cluster
-<shard cluster>` operates with a group of *three* config servers that use a
+<sharded cluster>` operates with a group of *three* config servers that use a
 two-phase commit process that ensures immediate consistency and
 reliability.
@@ -330,7 +330,7 @@ database. MongoDB only writes data to the config server to:
 - migrate a chunk between shards.

 Additionally, all config servers must be available on initial setup
-of a shard cluster, each :program:`mongos` instance must be able
+of a sharded cluster, each :program:`mongos` instance must be able
 to write to the ``config.version`` collection.

 If one or two configuration instances become unavailable, the
@@ -354,15 +354,15 @@ queries or write operations to the cluster.

 Because the configuration data is small relative to the amount of data
 stored in a cluster, the amount of activity is relatively low, and 100%
-up time is not required for a functioning shard cluster. As a result,
+up time is not required for a functioning sharded cluster. As a result,
 backing up the config servers is not difficult. Backups of config
 servers are critical as clusters become totally inoperable when
 you lose all configuration instances and data. Precautions to ensure
 that the config servers remain available and intact are critical.

 .. note::

-   Configuration servers store metadata for a single shard cluster.
+   Configuration servers store metadata for a single sharded cluster.
    You must have a separate configuration server or servers for each
    cluster you administer.
@@ -561,18 +561,18 @@ Security
    enforce read-only limitations.

 .. versionchanged:: 2.0
-   Shard clusters support authentication. Previously, in version
+   Sharded clusters support authentication. Previously, in version
    1.8, sharded clusters will not support authentication and access
    control. You must run your sharded systems in trusted
    environments.

-To control access to a shard cluster, you must set the
-:setting:`keyFile` option on all components of the shard cluster. Use
+To control access to a sharded cluster, you must set the
+:setting:`keyFile` option on all components of the sharded cluster. Use
 the :option:`--keyFile <mongos --keyFile>` run-time option or the
 :setting:`keyFile` configuration option for all :program:`mongos`,
 configuration instances, and shard :program:`mongod` instances.

-There are two classes of security credentials in a shard cluster:
+There are two classes of security credentials in a sharded cluster:
 credentials for "admin" users (i.e. for the :term:`admin database`) and
 credentials for all other databases. These credentials reside in
 different locations within the cluster and have different roles:
2 changes: 1 addition & 1 deletion source/faq/developers.txt
@@ -280,7 +280,7 @@ occur simultaneously.

 In standalone and :term:`replica sets <replica set>` the lock's scope
 applies to a single :program:`mongod` instance or :term:`primary`
-instance. In a shard cluster, locks apply to each individual shard,
+instance. In a sharded cluster, locks apply to each individual shard,
 not to the whole cluster.

 A more granular approach to locking will appear in MongoDB v2.2. For
6 changes: 3 additions & 3 deletions source/faq/sharding.txt
@@ -81,7 +81,7 @@ How does MongoDB distribute queries among shards?

 The exact method for distributing queries to :term:`shards <shard>` in a
 :term:`cluster <sharded cluster>` depends on the nature of the query and the configuration of
-the shard cluster. Consider a sharded collection, using the
+the sharded cluster. Consider a sharded collection, using the
 :term:`shard key` ``user_id``, that has ``last_login`` and
 ``email`` attributes:
@@ -274,7 +274,7 @@ Can shard keys be randomly generated?
 :term:`Shard keys <shard key>` can be random. Random keys ensure
 optimal distribution of data across the cluster.

-:term:`Shard clusters <shard cluster>`, attempt to route queries to
+:term:`Sharded clusters <sharded cluster>`, attempt to route queries to
 *specific* shards when queries include the shard key as a parameter,
 because these directed queries are more efficient. In many cases,
 random keys can make it difficult to direct queries to specific
@@ -291,7 +291,7 @@ the shard key.
 However, documents that have the shard key *must* reside in the same
 *chunk* and therefore on the same server. If your sharded data set has
 too many documents with the exact same shard key you will not be able
-to distribute *those* documents across your shard cluster.
+to distribute *those* documents across your sharded cluster.

 .. STUB link to shard key granularity.
2 changes: 1 addition & 1 deletion source/includes/note-config-server-startup.rst
@@ -1,4 +1,4 @@
 .. note::

    All config servers must be running and available when you first initiate
-   a :term:`shard cluster`.
+   a :term:`sharded cluster`.
2 changes: 1 addition & 1 deletion source/includes/note-conn-pool-stats.rst
@@ -2,4 +2,4 @@

 :dbcommand:`connPoolStats` only returns meaningful results for
 :program:`mongos` instances and for :program:`mongod` instances
-in shard clusters.
+in sharded clusters.
4 changes: 2 additions & 2 deletions source/reference/command/fsync.txt
@@ -89,8 +89,8 @@ fsync
 .. note::

    :dbcommand:`fsync` lock is only possible on individual shards of
-   a shard cluster, not on the entire shard cluster. To backup an
-   entire shard cluster, please read :ref:`considerations for
+   a sharded cluster, not on the entire sharded cluster. To backup an
+   entire sharded cluster, please read :ref:`considerations for
    backing up sharded clusters <backups-with-sharding-and-replication>`.

 If your :program:`mongod` has :term:`journaling <journal>`
2 changes: 1 addition & 1 deletion source/reference/command/group.txt
@@ -82,7 +82,7 @@ group
 .. warning::

    :method:`group()` does not work in :term:`shard environments
-   <shard cluster>`. Use the :term:`aggregation framework` or
+   <sharded cluster>`. Use the :term:`aggregation framework` or
    :term:`map-reduce` (i.e. :command:`mapReduce` in :term:`sharded
    environments <sharding>`.