Commit f9f94e7

Author: Sam Kleinman

DOCS-1148 fixing stale links

1 parent 78b9b07 commit f9f94e7

38 files changed, +246 −356 lines

source/administration/journaling.txt

Lines changed: 1 addition & 1 deletion
@@ -180,7 +180,7 @@ journal directory before the server becomes available. If MongoDB must
 replay journal files, :program:`mongod` notes these events in the log
 output.
 
-There is no reason to run :dbcommand:`repair` in these situations.
+There is no reason to run :dbcommand:`repairDatabase` in these situations.
 
 .. _journaling-internals:
 

source/administration/monitoring.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ performance.
 If :data:`globalLock.totalTime <serverStatus.globalLock.totalTime>` is
 high in context of :data:`~serverStatus.uptime` then the database has
 existed in a lock state for a significant amount of time. If
-:data:`globalLock.ratio` is also high, MongoDB has likely been
+:data:`globalLock.ratio <serverStatus.globalLock.ratio>` is also high, MongoDB has likely been
 processing a large number of long running queries. Long queries are
 often the result of a number of factors: ineffective use of indexes,
 non-optimal schema design, poor query structure, system architecture
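The corrected cross-reference points at ``serverStatus.globalLock.ratio``, which is derived from ``lockTime`` over ``totalTime``. A minimal plain-JavaScript sketch of that arithmetic, using an invented sample document rather than real ``db.serverStatus()`` output:

```javascript
// Sketch of the globalLock ratio check described above. The sample
// document is invented for demonstration; in the mongo shell the real
// values come from db.serverStatus().globalLock.
function globalLockRatio(status) {
  var g = status.globalLock;
  // guard against division by zero on a freshly started server
  return g.totalTime > 0 ? g.lockTime / g.totalTime : 0;
}

var sampleStatus = { globalLock: { totalTime: 2000000, lockTime: 500000 } };
console.log(globalLockRatio(sampleStatus)); // 0.25
```

A high value here, relative to uptime, is the signal the monitoring section describes: long-running queries holding the global lock.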

source/administration/replica-set-architectures.txt

Lines changed: 21 additions & 17 deletions
@@ -59,13 +59,15 @@ conditions are true:
 
 - The set has no more than 7 voting members at a time.
 
-- Members that cannot function as primaries in a :term:`failover`
-  have their :data:`priority <members[n].priority>` values set to ``0``.
+- Members that cannot function as primaries in a :term:`failover` have
+  their :data:`~local.system.replset.members[n].priority` values set to
+  ``0``.
 
-  If a member cannot function as a primary because of
-  resource or network latency constraints a :data:`priority <members[n].priority>` value
-  of ``0`` prevents it from being a primary. Any member with a
-  ``priority`` value greater than ``0`` is available to be a primary.
+  If a member cannot function as a primary because of resource or
+  network latency constraints a
+  :data:`~local.system.replset.members[n].priority` value of ``0``
+  prevents it from being a primary. Any member with a ``priority``
+  value greater than ``0`` is available to be a primary.
 
 - A majority of the set's members operate in the main data center.
 
@@ -78,9 +80,10 @@ Geographically Distributed Sets
 
 A geographically distributed replica set provides data recovery should
 one data center fail. These sets include at least one member in a
-secondary data center. The member has its :data:`priority
-<members[n].priority>` :ref:`set <replica-set-reconfiguration-usage>` to
-``0`` to prevent the member from ever becoming primary.
+secondary data center. The member has its
+:data:`~local.system.replset.members[n].priority`
+:ref:`set <replica-set-reconfiguration-usage>` to ``0`` to prevent the
+member from ever becoming primary.
 
 In many circumstances, these deployments consist of the following:
 
@@ -91,8 +94,8 @@ In many circumstances, these deployments consist of the following:
   This member can become the primary member at any time.
 
 - One secondary member in a secondary data center. This member is
-  ineligible to become primary. Set its :data:`members[n].priority` to
-  ``0``.
+  ineligible to become primary. Set its
+  :data:`local.system.replset.members[n].priority` to ``0``.
 
 If the primary is unavailable, the replica set will elect a new primary
 from the primary data center.
@@ -126,10 +129,11 @@ You might create such a member to provide backups, to support reporting
 operations, or to act as a cold standby. Such members fall into one or
 more of the following categories:
 
-- **Low-Priority**: These members have :data:`members[n].priority`
-  settings such that they are either unable to become :term:`primary` or
-  *very* unlikely to become primary. In all other respects these
-  low-priority members are identical to other replica set member. (See:
+- **Low-Priority**: These members
+  have :data:`local.system.replset.members[n].priority` settings such
+  that they are either unable to become :term:`primary` or *very*
+  unlikely to become primary. In all other respects these low-priority
+  members are identical to other replica set member. (See:
   :ref:`replica-set-secondary-only-members`.)
 
 - **Hidden**: These members cannot become primary *and* the set excludes
@@ -204,7 +208,7 @@ receive no traffic beyond what replication requires. While hidden members
 are not electable as primary, they are still able to *vote* in elections
 for primary. If your operational parameters requires this kind of
 reporting functionality, see :ref:`Hidden Replica Set Nodes
-<replica-set-hidden-members>` and :data:`members[n].hidden` for more
+<replica-set-hidden-members>` and :data:`local.system.replset.members[n].hidden` for more
 information regarding this functionality.
 
 Cold Standbys
@@ -233,7 +237,7 @@ primary *and* a quorum of voting members in the main facility.
 .. note::
 
    If your set already has ``7`` members, set the
-   :data:`members[n].votes` value to ``0`` for these members, so that
+   :data:`local.system.replset.members[n].votes` value to ``0`` for these members, so that
    they won't vote in elections.
 
 .. seealso:: :ref:`Secondary Only <replica-set-secondary-only-members>`,
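The pattern these hunks document, demoting a member by zeroing ``members[n].priority`` (and, past 7 voting members, ``members[n].votes``), can be sketched in plain JavaScript. In the mongo shell the config document would come from ``rs.conf()`` and be reapplied with ``rs.reconfig(cfg)``; here an invented sample config stands in:

```javascript
// Sketch of the demotion pattern from the replica-set architectures
// section. Host names and the helper name are illustrative.
function demoteMember(cfg, n) {
  cfg.members[n].priority = 0; // never eligible to become primary
  cfg.members[n].votes = 0;    // only needed when the set exceeds 7 voting members
  return cfg;
}

var cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-a:27017", priority: 1, votes: 1 },
    { _id: 1, host: "dc2-a:27017", priority: 1, votes: 1 }
  ]
};
demoteMember(cfg, 1); // the secondary-data-center member
console.log(cfg.members[1].priority, cfg.members[1].votes); // 0 0
```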

source/administration/replica-sets.txt

Lines changed: 18 additions & 18 deletions
@@ -498,7 +498,7 @@ The first operation uses :method:`rs.conf()` to set the local variable
 is a :term:`document`. The next three operations change the
 :data:`~local.system.replset.settings.members[n].priority` value in the ``cfg`` document for the
 first three members configured in the :data:`members
-<rs.conf.members>` array. The final operation
+<local.system.replset.members>` array. The final operation
 calls :method:`rs.reconfig()` with the argument of ``cfg`` to initialize
 the new configuration.
 
@@ -667,7 +667,7 @@ You can use the following sequence of commands:
    cfg.settings.chainingAllowed = true
    rs.reconfig(cfg)
 
-.. note:: 
+.. note::
 
    If chained replication is disabled, you still can use
    :dbcommand:`replSetSyncFrom` to specify that a secondary replicates
@@ -745,11 +745,11 @@ To resync the stale member:
    At this point, the :program:`mongod` will perform an initial
    sync. The length of the initial sync may process depends on the
    size of the database and network connection between members of the
-   replica set. 
+   replica set.
 
    Initial sync operations can impact the other members of the set and
    create additional traffic to the primary, and can only occur if
-   another member of the set is accessible and up to date. 
+   another member of the set is accessible and up to date.
 
 .. index:: replica set; resync
 .. _replica-set-resync-by-copying:
@@ -758,14 +758,14 @@ Resync by Copying All Datafiles from Another Member
 ```````````````````````````````````````````````````
 
 This approach uses a copy of the data files from an existing member of
-the replica set, or a back of the data files to "seed" the stale member. 
+the replica set, or a back of the data files to "seed" the stale member.
 
 The copy or backup of the data files **must** be sufficiently recent
 to allow the new member to catch up with the :term:`oplog`, otherwise
 the member would need to perform an initial sync.
 
-.. note:: 
-
+.. note::
+
    In most cases you cannot copy data files from a running
    :program:`mongod` instance to another, because the data files will
    change during the file copy operation. Consider the
@@ -795,12 +795,12 @@ Additionally, MongoDB provides an authentication mechanism for
 replica sets. These instances enable authentication but specify a
 shared key file that serves as a shared password.
 
-.. versionadded:: 1.8 
+.. versionadded:: 1.8
    Added support authentication in replica set deployments.
 
 .. versionchanged:: 1.9.1
    Added support authentication in sharded replica set deployments.
-
+
 
 To enable authentication add the following option to your configuration file:
 
@@ -1155,20 +1155,20 @@ the oplog has the wrong data type in the ``ts`` field.
 .. code-block:: javascript
 
    { "ts" : {t: 1347982456000, i: 1},
-   "h" : NumberLong("8191276672478122996"), 
-   "op" : "n", 
-   "ns" : "", 
+   "h" : NumberLong("8191276672478122996"),
+   "op" : "n",
+   "ns" : "",
   "o" : { "msg" : "Reconfig set", "version" : 4 } }
 
 And the second query returns this as the last entry where ``ts``
 has the ``Timestamp`` type:
 
 .. code-block:: javascript
 
-   { "ts" : Timestamp(1347982454000, 1), 
-   "h" : NumberLong("6188469075153256465"), 
-   "op" : "n", 
-   "ns" : "", 
+   { "ts" : Timestamp(1347982454000, 1),
+   "h" : NumberLong("6188469075153256465"),
+   "op" : "n",
+   "ns" : "",
   "o" : { "msg" : "Reconfig set", "version" : 3 } }
 
 Then the value for the ``ts`` field in the last oplog entry is of the
@@ -1179,7 +1179,7 @@ use an update operation that resembles the following:
 
 .. code-block:: javascript
 
-   db.oplog.rs.update( { ts: { t:1347982456000, i:1 } }, 
+   db.oplog.rs.update( { ts: { t:1347982456000, i:1 } },
   { $set: { ts: new Timestamp(1347982456000, 1)}})
 
 Modify the timestamp values as needed based on your oplog entry. This
@@ -1232,4 +1232,4 @@ primary and the set will become read only. To avoid this situation,
 attempt to place a majority of instances in one data center with a
 minority of instances in a secondary facility.
 
-.. see:: :ref:`replica-set-election-internals`. 
+.. see:: :ref:`replica-set-election-internals`.
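One hunk above ends at "To enable authentication add the following option to your configuration file:", with the option itself outside the diff context. In replica set deployments of this era that option was ``keyFile``; a sketch of the fragment, with an illustrative path:

```
# Shared key file serving as the replica set's shared password.
# The path is illustrative; point it at a file readable only by mongod.
keyFile = /srv/mongodb/keyfile
```

The same key file must be deployed to every member of the set for intra-cluster authentication to succeed.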

source/core/data-modeling.txt

Lines changed: 1 addition & 1 deletion
@@ -250,7 +250,7 @@ start.
 The :option:`--nssize <mongod --nssize>` sets the size for *new*
 ``<database>.ns`` files. For existing databases, after starting up the
 server with :option:`--nssize <mongod --nssize>`, run the
-:dbcommand:`db.repairDatabase()` command from the :program:`mongo`
+:method:`db.repairDatabase()` command from the :program:`mongo`
 shell.
 
 Indexes

source/core/geospatial-indexes.txt

Lines changed: 4 additions & 4 deletions
@@ -180,9 +180,9 @@ area in order to improve performance for queries limited to that area.
 Each bucket in a haystack index contains all the documents within a
 specified proximity to a given longitude and latitude. Use the
 ``bucketSize`` parameter of :method:`ensureIndex()
-<db.command.ensureIndex()>` to determine proximity. A ``bucketSize``
-of ``5`` creates an index that groups location values that are within
-5 units of the specified longitude and latitude.
+<db.collection.ensureIndex()>` to determine proximity. A
+``bucketSize`` of ``5`` creates an index that groups location values
+that are within 5 units of the specified longitude and latitude.
 
 ``bucketSize`` also determines the granularity of the index. You can
 tune the parameter to the distribution of your data so that in general
@@ -191,7 +191,7 @@ space. Furthermore, the areas defined by buckets can overlap: as a
 result a document can exist in multiple buckets.
 
 To build a haystack index, use the ``bucketSize`` parameter in the
-:method:`ensureIndex() <db.command.ensureIndex()>` method, as in the
+:method:`ensureIndex() <db.collection.ensureIndex()>` method, as in the
 following prototype:
 
 .. code-block:: javascript
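The corrected reference target, ``db.collection.ensureIndex()``, takes ``bucketSize`` as an index option when building a haystack index. A mongo-shell sketch, not runnable standalone since it needs a live ``mongod``; the collection and field names are illustrative:

```javascript
// Illustrative mongo-shell fragment: haystack index grouping locations
// within 5 units of each other, keyed on one additional field.
db.places.ensureIndex(
    { pos: "geoHaystack", type: 1 },  // location field plus one extra field
    { bucketSize: 5 }                 // proximity, per the section above
)
```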

source/core/object-id.txt

Lines changed: 4 additions & 4 deletions
@@ -47,11 +47,11 @@ and methods:
 
   The hexadecimal string value of the ``ObjectId()`` object.
 
-- :method:`getTimestamp() <ObjectId.getTimestamp()>`
+- :method:`~ObjectId.getTimestamp()`
 
   Returns the timestamp portion of the ``ObjectId()`` object as a Date.
 
-- :method:`toString() <ObjectId.toString()>`
+- :method:`~ObjectId.toString()`
 
   Returns the string representation of the ``ObjectId()`` object. The
   returned string literal has the format "``ObjectId(...)``".
@@ -128,7 +128,7 @@ Consider the following uses ``ObjectId()`` class in the
      507f191e810c19729de860ea
 
 - To return the string representation of an ``ObjectId()`` object, use
-  the :method:`toString() <ObjectId.toString()>` method as follows:
+  the :method:`~ObjectId.toString()` method as follows:
 
   .. code-block:: javascript
 
@@ -141,7 +141,7 @@ Consider the following uses ``ObjectId()`` class in the
      ObjectId("507f191e810c19729de860ea")
 
 - To return the value of an ``ObjectId()`` object as a hexadecimal
-  string, use the :method:`valueOf() <ObjectId.valueOf()>` method as
+  string, use the :method:`~ObjectId.valueOf()` method as
   follows:
 
   .. code-block:: javascript
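The ``getTimestamp()`` behavior these hunks reference can be emulated in plain JavaScript, outside the mongo shell: the first 4 bytes of an ObjectId encode a Unix timestamp in seconds, which ``getTimestamp()`` exposes as a Date. A sketch using the example value from the diff above:

```javascript
// Sketch: extract the timestamp portion of an ObjectId hex string,
// mirroring what ObjectId.getTimestamp() returns in the mongo shell.
function objectIdTimestamp(hex) {
  // the leading 8 hex characters are a big-endian seconds-since-epoch value
  var seconds = parseInt(hex.substring(0, 8), 16);
  return new Date(seconds * 1000);
}

var oid = "507f191e810c19729de860ea"; // example value from the diff above
console.log(objectIdTimestamp(oid).toISOString());
```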

source/core/read-operations.txt

Lines changed: 2 additions & 2 deletions
@@ -875,7 +875,7 @@ Consider the following behaviors related to cursors:
 
 - As you iterate through the cursor and reach the end of the returned
   batch, if there are more results, :method:`cursor.next()` will
-  perform a :data:`getmore operation <currentop.op>` to retrieve the next batch.
+  perform a :data:`getmore operation <currentOp.op>` to retrieve the next batch.
 
   To see how many documents remain in the batch as you iterate the
   cursor, you can use the :method:`~cursor.objsLeftInBatch()` method,
@@ -966,7 +966,7 @@ operations for more basic data aggregation operations:
 
 - :dbcommand:`count` (:method:`~cursor.count()`)
 
-- :dbcommand:`distinct` (:method:`~cursor.distinct()`)
+- :dbcommand:`distinct` (:method:`db.collection.distinct()`)
 
 - :dbcommand:`group` (:method:`db.collection.group()`)
 
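The second hunk repoints the ``distinct`` helper from a cursor method to ``db.collection.distinct()``. Its semantics, the unique values of one field across documents, can be sketched in plain JavaScript over an in-memory array; the documents and field name are invented:

```javascript
// Sketch of distinct's semantics. In the mongo shell this would be
// db.collection.distinct("status") against a live collection.
function distinct(docs, field) {
  var seen = {};
  var out = [];
  docs.forEach(function (doc) {
    var v = doc[field];
    if (v !== undefined && !seen[v]) { // keep first occurrence only
      seen[v] = true;
      out.push(v);
    }
  });
  return out;
}

var docs = [{ status: "A" }, { status: "B" }, { status: "A" }];
console.log(distinct(docs, "status")); // [ "A", "B" ]
```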

source/core/replication.txt

Lines changed: 6 additions & 5 deletions
@@ -141,14 +141,14 @@ Member Priority
 In a replica set, every member has a "priority," that helps determine
 eligibility for :ref:`election <replica-set-elections>` to
 :term:`primary`. By default, all members have a priority of ``1``,
-unless you modify the :data:`members[n].priority` value. All members
+unless you modify the :data:`~local.system.replset.members[n].priority` value. All members
 have a single vote in elections.
 
 .. warning::
 
-   Always configure the :data:`members[n].priority` value to control
+   Always configure the :data:`~local.system.replset.members[n].priority` value to control
    which members will become primary. Do not configure
-   :data:`members[n].votes` except to permit more than 7 secondary
+   :data:`~local.system.replset.members[n].votes` except to permit more than 7 secondary
    members.
 
 For more information on member priorities, see the
@@ -468,8 +468,9 @@ The architecture and design of the :term:`replica set` deployment can
 have a great impact on the set's capacity and capability. This section
 provides a general overview of the architectural possibilities for
 replica set deployments. However, for most production deployments a
-conventional 3-member replica set with :data:`members[n].priority`
-values of ``1`` are sufficient.
+conventional 3-member replica set with
+:data:`~local.system.replset.members[n].priority` values of ``1`` are
+sufficient.
 
 While the additional flexibility discussed is below helpful for
 managing a variety of operational complexities, it always makes sense

source/core/write-operations.txt

Lines changed: 2 additions & 2 deletions
@@ -361,7 +361,7 @@ more efficient than those updates that cause document growth. Use
 document growth when possible.
 
 For complete examples of update operations, see
-:doc:`/applications/update`. 
+:doc:`/applications/update`.
 
 .. _write-operations-padding-factor:
 
@@ -396,7 +396,7 @@ padding for new inserts and moves.
  but does not eliminate, document movements.
 
 To check the current :data:`~collStats.paddingFactor` on a collection, you can
-run the :dbcommand:`db.collection.stats()` command in the
+run the :method:`db.collection.stats()` operation in the
 :program:`mongo` shell, as in the following example:
 
 .. code-block:: javascript
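The corrected method form, ``db.collection.stats()``, is a shell helper around the ``collStats`` command. A mongo-shell sketch of the check this hunk describes, not runnable standalone since it needs a live ``mongod``; the collection name is invented:

```javascript
// Illustrative mongo-shell fragment: read the current paddingFactor
// from the collStats output for a collection.
db.records.stats().paddingFactor
```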
