diff --git a/source/core/sharded-cluster-components.txt b/source/core/sharded-cluster-components.txt
index 04eed14d858..bad46305c11 100644
--- a/source/core/sharded-cluster-components.txt
+++ b/source/core/sharded-cluster-components.txt
@@ -46,32 +46,33 @@ for a production sharded cluster deployment:
 Where possible, consider deploying one member of each replica set in a
 site suitable for being a disaster recovery location.
 
-Sharding requires at least two shards to distribute sharded data. Single
-shard sharded clusters may be useful if you plan on enabling sharding in the
-near future, but do not need to at the time of deployment.
-
-Deploying multiple :binary:`~bin.mongos` routers supports high availability
-and scalability. A common pattern is to place a :binary:`~bin.mongos` on
-each application server. Deploying one :binary:`~bin.mongos` router on each
-application server reduces network latency between the application and
-the router.
-
-Alternatively, you can place a :binary:`~bin.mongos` router on each shard
-primary. This approach also reduces network latency between the
-application and the router: applications use a :doc:`connection
-string ` listing all the hostnames of each shard primary. The MongoDB
-driver then determines the network latency for each :binary:`~bin.mongos`
-and load balances randomly across the routers that fall within a set
-:ref:`latency window `. Ensure that the
-server hosting the shard primary and :binary:`~bin.mongos` router has
-sufficient capacity to accommodate the extra CPU and memory
-requirements.
+Sharding requires at least two shards to distribute sharded data. Single
+shard sharded clusters may be useful if you plan on enabling sharding in
+the near future, but do not need to do so at the time of deployment.
+
+Deploying multiple :binary:`~bin.mongos` routers supports high
+availability and scalability. A common pattern is to place a
+:binary:`~bin.mongos` on each application server. Deploying one
+:binary:`~bin.mongos` router on each application server reduces network
+latency between the application and the router.
+
+Alternatively, you can place a :binary:`~bin.mongos` router on dedicated
+hosts. Large deployments benefit from this approach because it decouples
+the number of client application servers from the number of
+:binary:`~bin.mongos` instances and gives greater control over the
+number of connections the :binary:`~bin.mongod` instances serve.
+
+Installing :binary:`~bin.mongos` instances on their own hosts also
+allows these instances to use more memory, because memory is not shared
+with a :binary:`~bin.mongod` instance. You can use primary shards to
+host :binary:`~bin.mongos` routers, but be aware that memory contention
+may become an issue in large deployments.
 
 There is no limit to the number of :binary:`~bin.mongos` routers you can
-have in a deployment. However, as :binary:`~bin.mongos` routers communicate
-frequently with your config servers, monitor config server performance
-closely as you increase the number of routers. If you see performance
-degradation, it may be beneficial to cap the number of
+have in a deployment. However, as :binary:`~bin.mongos` routers
+communicate frequently with your config servers, monitor config server
+performance closely as you increase the number of routers. If you see
+performance degradation, it may be beneficial to cap the number of
 :binary:`~bin.mongos` routers in your deployment.
 
 .. include:: /images/sharded-cluster-production-architecture.rst
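Context for the multiple-router discussion in the changed paragraphs: applications typically reach the routers through a connection string that lists every :binary:`~bin.mongos` host, and the driver distributes operations across the listed routers. A minimal sketch, using hypothetical hostnames that are not part of this patch:

```sh
# Hypothetical mongos hostnames; listing several routers in one
# connection string lets the driver fail over and load balance
# across them.
mongosh "mongodb://mongos0.example.net:27017,mongos1.example.net:27017/"
```

This is a connection-string fragment for illustration only, not an addition to the patched file.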