@@ -46,32 +46,33 @@ for a production sharded cluster deployment:
 
  Where possible, consider deploying one member of each replica set
  in a site suitable for being a disaster recovery location.
 
- Sharding requires at least two shards to distribute sharded data. Single
- shard sharded clusters may be useful if you plan on enabling sharding in the
- near future, but do not need to at the time of deployment.
-
- Deploying multiple :binary:`~bin.mongos` routers supports high availability
- and scalability. A common pattern is to place a :binary:`~bin.mongos` on
- each application server. Deploying one :binary:`~bin.mongos` router on each
- application server reduces network latency between the application and
- the router.
-
- Alternatively, you can place a :binary:`~bin.mongos` router on each shard
- primary. This approach also reduces network latency between the
- application and the router: applications use a :doc:`connection
- string </reference/connection-string>` listing all the hostnames of each shard primary. The MongoDB
- driver then determines the network latency for each :binary:`~bin.mongos`
- and load balances randomly across the routers that fall within a set
- :ref:`latency window <selection-discovery-options>`. Ensure that the
- server hosting the shard primary and :binary:`~bin.mongos` router has
- sufficient capacity to accommodate the extra CPU and memory
- requirements.
+ Sharding requires at least two shards to distribute sharded data.
+ Single-shard sharded clusters may be useful if you plan on enabling
+ sharding in the near future, but do not need to at the time of deployment.
+
+ Deploying multiple :binary:`~bin.mongos` routers supports high
+ availability and scalability. A common pattern is to place a
+ :binary:`~bin.mongos` on each application server. Deploying one
+ :binary:`~bin.mongos` router on each application server reduces network
+ latency between the application and the router.
+
+ Alternatively, you can place :binary:`~bin.mongos` routers on dedicated
+ hosts. Large deployments benefit from this approach because it decouples
+ the number of client application servers from the number of
+ :binary:`~bin.mongos` instances, giving you greater control over the
+ number of connections each :binary:`~bin.mongod` instance serves.
+
+ Installing :binary:`~bin.mongos` instances on their own hosts also
+ allows them to use greater amounts of memory, since memory is not shared
+ with a :binary:`~bin.mongod` instance. It is possible to host
+ :binary:`~bin.mongos` routers on the shard primaries, but be aware that
+ memory contention can become an issue on large deployments.
 
 
  There is no limit to the number of :binary:`~bin.mongos` routers you can
- have in a deployment. However, as :binary:`~bin.mongos` routers communicate
- frequently with your config servers, monitor config server performance
- closely as you increase the number of routers. If you see performance
- degradation, it may be beneficial to cap the number of
+ have in a deployment. However, as :binary:`~bin.mongos` routers
+ communicate frequently with your config servers, monitor config server
+ performance closely as you increase the number of routers. If you see
+ performance degradation, it may be beneficial to cap the number of
 :binary:`~bin.mongos` routers in your deployment.
 
 
 .. include:: /images/sharded-cluster-production-architecture.rst
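As a hedged sketch of the dedicated-router pattern the added lines describe (all hostnames and the config server replica set name ``cfgRS`` are placeholders, not values from the documentation itself):

```shell
# Start a mongos router on its own dedicated host, pointing it at the
# config server replica set (replica set name and hostnames are
# hypothetical examples).
mongos --configdb cfgRS/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019 \
       --bind_ip localhost,mongos1.example.net

# Applications then connect with a connection string that lists every
# dedicated mongos host, so the driver can distribute and fail over
# across the routers.
mongosh "mongodb://mongos1.example.net:27017,mongos2.example.net:27017,mongos3.example.net:27017/"
```

Because this is a deployment fragment, the exact hostnames, ports, and replica set name would come from your own cluster topology.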