@@ -794,7 +794,7 @@ bulk insert operations:
794794 shards. To avoid this performance cost, you can pre-split the
795795 collection, as described in :ref:`sharding-administration-pre-splitting`.
796796
797- - You can parallels import by sending insert operations to more than
797+ - You can parallelize imports by sending insert operations to more than
798798 one :program:`mongos` instance. If the collection is empty,
799799 pre-split first, as described in
800800 :ref:`sharding-administration-pre-splitting`.
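As a minimal sketch of this pattern in the :program:`mongo` shell, the example below pre-splits an empty collection and notes where parallel import workers would attach. The ``records.imports`` namespace, the ``userId`` shard key, and the split points are hypothetical placeholders; see :ref:`sharding-administration-pre-splitting` for the full procedure.

.. code-block:: javascript

   // Shard the (still empty) target collection on a hypothetical key.
   sh.enableSharding("records")
   sh.shardCollection("records.imports", { userId: 1 })

   // Pre-split the empty collection into ranges. The balancer (or explicit
   // moveChunk operations, not shown here) can then distribute the resulting
   // chunks across the shards before the bulk import starts.
   for (var i = 1; i < 10; i++) {
      sh.splitAt("records.imports", { userId: i * 1000 })
   }

   // Each import worker can then send its inserts through a different
   // mongos instance, so the insert load is spread across the cluster.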
@@ -810,7 +810,7 @@ bulk insert operations:
810810 increasing shard key, then consider the following modifications to
811811 your application:
812812
813- - Reverse all the bits of the shard key to preserves the information
813+ - Reverse all the bits of the shard key to preserve the information
814814 while avoiding the correlation of insertion order and increasing
815815 sequence of values.
816816
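A hedged illustration of the bit-reversal approach: the helper below is a hypothetical :program:`mongo` shell function (not part of MongoDB) that reverses the bits of an unsigned 32-bit sequence number before it is used as a shard key value.

.. code-block:: javascript

   // Hypothetical helper: reverse the bits of an unsigned 32-bit integer so
   // that consecutive sequence numbers map to widely separated key values.
   function reverseBits32(n) {
      var reversed = 0;
      for (var i = 0; i < 32; i++) {
         reversed = (reversed << 1) | (n & 1);   // append the lowest bit of n
         n = n >>> 1;                            // then drop that bit from n
      }
      return reversed >>> 0;                     // interpret as unsigned
   }

   // Consecutive sequence numbers no longer cluster in the key space:
   print(reverseBits32(1))   // 2147483648
   print(reverseBits32(2))   // 1073741824
   print(reverseBits32(3))   // 3221225472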
@@ -994,8 +994,8 @@ all migration, use the following procedure:
994994
995995.. note::
996996
997- If a migration is in progress progress , the system will complete
998- the in progress migration. After disabling, you can use the
997+ If a migration is in progress, the system will complete
998+ the in-progress migration. After disabling, you can use the
999999 following operation in the :program:`mongo` shell to determine if
10001000 there are no migrations in progress:
10011001
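A minimal sketch of such a check, assuming the balancer has already been disabled as described above, uses the ``sh.isBalancerRunning()`` shell helper to poll until no migration remains active:

.. code-block:: javascript

   // Poll until the balancer reports that no migration is still running.
   while (sh.isBalancerRunning()) {
      print("waiting for the in-progress migration to finish...")
      sleep(1000)   // wait one second before checking again
   }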
@@ -1233,7 +1233,7 @@ of the cluster metadata from the config database is straight forward:
12331233
12341234.. seealso:: :doc:`backups`.
12351235
1236- .. [#read-only] While one of the three config servers unavailable, no
1236+ .. [#read-only] While one of the three config servers is unavailable,
12371237 the cluster cannot split any chunks nor can it migrate chunks
12381238 between shards. Your application will be able to write data to the
12391239 cluster. The :ref:`sharding-config-server` section of the
@@ -1291,7 +1291,7 @@ Finally, if your shard key has a low :ref:`cardinality
12911291<sharding-shard-key-cardinality>`, MongoDB may not be able to create
12921292sufficient splits among the data.
12931293
1294- One Shard Receives too much Traffic
1294+ One Shard Receives Too Much Traffic
12951295~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
12961296
12971297In some situations, a single shard or a subset of the cluster will
@@ -1307,7 +1307,7 @@ In the worst case, you may have to consider re-sharding your data
13071307and :ref:`choosing a different shard key <sharding-internals-choose-shard-key>`
13081308to correct this pattern.
13091309
1310- The Cluster does not Balance
1310+ The Cluster Does Not Balance
13111311~~~~~~~~~~~~~~~~~~~~~~~~~~~~
13121312
13131313If you have just deployed your sharded cluster, you may want to
@@ -1362,7 +1362,7 @@ consider the following options, depending on the nature of the impact:
13621362 :ref:`add one or two shards <sharding-procedure-add-shard>` to
13631363 the cluster to distribute load.
13641364
1365- It's also possible, that your shard key causes your
1365+ It's also possible that your shard key causes your
13661366application to direct all writes to a single shard. This kind of
13671367activity pattern can require the balancer to migrate most data soon after writing
13681368it. Consider redeploying your cluster with a shard key that provides