
Commit aafabc6

DOCSP-33956: atomic typos fy2024-q4 (#148)
* DOCSP-33956: atomic typos fy2024-q4
* vale need to fix
* vale fixes
* CC suggestions
* left over fixes
1 parent 3555590 commit aafabc6

15 files changed (+40, -37 lines changed)

source/introduction/install.txt

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@ You can download the connector source and JAR files from the following locations
 - `mongo-kafka-connect <https://search.maven.org/artifact/org.mongodb.kafka/mongo-kafka-connect>`__

 You can identify the contents of the JAR files by the suffix in the
-filename. Consult the following table for a description of each suffix:
+file name. Consult the following table for a description of each suffix:

 .. list-table::
 :widths: 25 75

source/introduction/kafka-connect.txt

Lines changed: 4 additions & 3 deletions
@@ -69,7 +69,7 @@ For more information on Kafka Connect, see the following resources:
 reliable pipeline.
 - There are a large number of community maintained connectors for connecting
 Apache Kafka to popular datastores like MongoDB, PostgreSQL, and MySQL using the
-Kafka Connect framework. This reduces the amount of boilerplate code you need to
+Kafka Connect framework. This reduces the amount of boilerplate code you must
 write and maintain to manage database connections, error handling,
 dead letter queue integration, and other problems involved in connecting Apache Kafka
 with a datastore.
@@ -85,5 +85,6 @@ cluster as a data source, and a MongoDB cluster as a data sink.
 .. figure:: /includes/figures/connect-data-flow.png
 :alt: Dataflow diagram of Kafka Connect deployment.

-All connectors and datastores in the example pipeline are optional, and you can
-swap them out for the connectors and datastores you need for your deployment.
+All connectors and datastores in the example pipeline are optional.
+You can replace them with the connectors and datastores you need
+for your deployment.

source/monitoring.txt

Lines changed: 15 additions & 15 deletions
@@ -41,7 +41,7 @@ to satisfy those use cases.
 .. tip:: Computed Values

 To learn what types of metrics the connector provides and when
-you must implement logic to compute a value, see
+to implement logic to compute a value, see
 :ref:`<kafka-monitoring-types-of-metrics>`.

 Sink Connector
@@ -58,22 +58,22 @@ to satisfy those use cases:
 * - Use Case
 - Metrics to Use

-* - You need to know if a component of your pipeline is falling behind.
+* - You want to know if a component of your pipeline is falling behind.
 - Use the ``latest-kafka-time-difference-ms``
 metric. This metric indicates the interval of time between
 when a record arrived in a Kafka topic and when your connector
 received that record. If the value of this metric is increasing,
 it signals that there may be a problem with {+kafka+} or MongoDB.

-* - You need to know the total number of records your connector
+* - You want to know the total number of records your connector
 wrote to MongoDB.
 - Use the ``records`` metric.

-* - You need to know the total number of write errors your connector
+* - You want to know the total number of write errors your connector
 encountered when attempting to write to MongoDB.
 - Use the ``batch-writes-failed`` metric.

-* - You need to know if your MongoDB performance is getting slower
+* - You want to know if your MongoDB performance is getting slower
 over time.
 - Use the ``in-task-put-duration-ms`` metric to initially diagnose
 a slowdown.
@@ -84,8 +84,8 @@ to satisfy those use cases:
 - ``batch-writes-failed-duration-over-<number>-ms``
 - ``processing-phase-duration-over-<number>-ms``

-* - You need to find a bottleneck in how {+kafka-connect+} and your MongoDB sink
-connector write {+kafka+} records to MongoDB.
+* - You want to find the time {+kafka-connect+} and the MongoDB sink
+connector spend writing records to MongoDB.
 - Compare the values of the following metrics:

 - ``in-task-put-duration-ms``
@@ -108,17 +108,17 @@ to satisfy those use cases:
 * - Use Case
 - Metrics to Use

-* - You need to know if a component of your pipeline is falling behind.
+* - You want to know if a component of your pipeline is falling behind.
 - Use the ``latest-mongodb-time-difference-secs``
 metric. This metric indicates how old the most recent change
 stream event your connector processed is. If this metric is increasing,
 it signals that there may be a problem with {+kafka+} or MongoDB.

-* - You need to know the total number of change stream events your source connector
+* - You want to know the total number of change stream events your source connector
 has processed.
 - Use the ``records`` metric.

-* - You need to know the percentage of records your connector
+* - You want to know the percentage of records your connector
 received but failed to write to {+kafka+}.
 - Perform the following calculation with the ``records``,
 ``records-filtered``, and ``records-acknowledged`` metrics:
@@ -127,7 +127,7 @@ to satisfy those use cases:

 (records - (records-acknowledged + records-filtered)) / records

-* - You need to know the average size of the documents your connector
+* - You want to know the average size of the documents your connector
 has processed.
 - Perform the following calculation with the ``mongodb-bytes-read`` and
 ``records`` metrics:
@@ -139,14 +139,14 @@ to satisfy those use cases:
 To learn how to calculate the average size of records over a span of
 time, see :ref:`mongodb-bytes-read <kafka-monitoring-averge-record-size-span>`.

-* - You need to find a bottleneck in how {+kafka-connect+} and your MongoDB source
-connector write MongoDB documents to {+kafka+}.
+* - You want to find the time {+kafka-connect+} and the MongoDB
+source connector spend writing records to {+kafka+}.
 - Compare the values of the following metrics:

 - ``in-task-poll-duration-ms``
 - ``in-connect-framework-duration-ms``

-* - You need to know if your MongoDB performance is getting slower
+* - You want to know if your MongoDB performance is getting slower
 over time.
 - Use the ``in-task-poll-duration-ms`` metric to initially diagnose
 a slowdown.
@@ -220,7 +220,7 @@ types of quantities:
 - The value related to the most recent occurrence of an event

 For some use cases, you must perform extra computations with the
-metrics the connector provides. For example, you must compute the
+metrics the connector provides. For example, you can compute the
 following values from provided metrics:

 - The rate of change of a metric
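
A short worked example of the failed-write percentage calculation quoted in this diff, using hypothetical metric values (the numbers are illustrative only and do not come from the documentation):

  (records - (records-acknowledged + records-filtered)) / records
  = (1000 - (950 + 30)) / 1000
  = 20 / 1000
  = 0.02

With these assumed values, 2% of the records the source connector received were neither filtered nor acknowledged, meaning they failed to reach the {+kafka+} topic.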

source/quick-start.txt

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ Kafka topic, and to read data from a Kafka topic and write it to MongoDB.

 To complete the steps in this guide, you must download and work in a
 **sandbox**, a containerized development environment that includes services
-you need to build a sample *data pipeline*.
+required to build a sample *data pipeline*.

 Read the following sections to set up your sandbox and sample data pipeline.

source/security-and-authentication/mongodb-aws-auth.txt

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ AWS IAM credentials, see the guide on :atlas:`How to Set Up Unified AWS Access <

 .. important::

-You need to use {+connector+} version 1.5 of later to connect to a MongoDB
+You must use {+connector+} version 1.5 of later to connect to a MongoDB
 server set up to authenticate using your AWS IAM credentials. AWS IAM
 credential authentication is available in MongoDB server version 4.4
 and later.
@@ -39,7 +39,7 @@ connection URI connector property as shown in the following example:

 connection.uri=mongodb://<AWS access key id>:<AWS secret access key>@<hostname>:<port>/?authSource=<authentication database>&authMechanism=MONGODB-AWS&authMechanismProperties=AWS_SESSION_TOKEN:<AWS session token>

-The preceding example uses the following placeholders which you need to
+The preceding example uses the following placeholders which you must
 replace:

 .. list-table::

source/sink-connector/fundamentals/write-strategies.txt

Lines changed: 1 addition & 1 deletion
@@ -326,7 +326,7 @@ business key, perform the following tasks:
 #. Specify the ``DeleteOneBusinessKeyStrategy`` write model strategy in the
 connector configuration.

-Suppose you need to delete a calendar event from a specific year from
+Suppose you want to delete a calendar event from a specific year from
 a collection that contains a document that resembles the following:

 .. _delete-one-business-key-sample-document:
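
As a hedged sketch only: a sink connector configuration for this strategy might resemble the following. The ``writemodel.strategy`` and ``document.id.strategy`` property names reflect the connector's documented sink configuration, but the business-key fields listed here are hypothetical and are not taken from the calendar-event example on that page.

  writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy
  document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy
  document.id.strategy.partial.value.projection.type=AllowList
  document.id.strategy.partial.value.projection.list=eventType,year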

source/source-connector/configuration-properties/startup.txt

Lines changed: 2 additions & 1 deletion
@@ -75,7 +75,8 @@ Settings
 | **Description:**
 | Actuated only if ``startup.mode=timestamp``. Specifies the
 starting point for the change stream. To learn more about
-Change Stream parameters, see the :rapid:`Server manual entry </reference/operator/aggregation/changeStream/>`.
+Change Stream parameters, see the :manual:`Server manual entry
+</reference/operator/aggregation/changeStream/>`.
 | **Default**: ``""``
 | **Accepted Values**:
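
A minimal sketch of how the ``timestamp`` startup mode might be used, assuming the property described above is ``startup.mode.timestamp.start.at.operation.time`` and that it accepts an ISO-8601 timestamp; confirm both the property name and the accepted value formats against the settings table before relying on this:

  startup.mode=timestamp
  startup.mode.timestamp.start.at.operation.time=2024-01-01T00:00:00Z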

source/source-connector/fundamentals/change-streams.txt

Lines changed: 4 additions & 3 deletions
@@ -45,8 +45,9 @@ To learn more about the oplog, see the MongoDB manual entry on the
 Aggregation
 ~~~~~~~~~~~

-Use an aggregation pipeline to configure your source connector's change stream.
-Some of the ways you can configure your connector's change stream are as follows:
+Use an aggregation pipeline to configure your source connector's change
+stream. You can configure a connector change stream to use an
+aggregation pipeline to perform tasks including the following operations:

 - Filter change events by operation type
 - Project specific fields
@@ -87,7 +88,7 @@ The oplog is a special capped collection which cannot use indexes. For more
 information on this limitation, see
 :manual:`Change Streams Production Recommendations </administration/change-streams-production-recommendations/#indexes>`.

-If you need to improve change stream performance, use a faster disk for
+If you want to improve change stream performance, use a faster disk for
 your MongoDB cluster and increase the size of your WiredTiger cache. To
 learn how to set your WiredTiger cache, see the guide on the
 :manual:`WiredTiger Storage Engine </core/wiredtiger/#memory-use>`.
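
To make the aggregation description concrete, here is a hedged sketch of a source connector ``pipeline`` setting that filters change events by operation type and removes a field from the published documents; the field name is hypothetical and not part of that page:

  pipeline=[{"$match": {"operationType": "insert"}}, {"$project": {"fullDocument.internalNotes": 0}}]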

source/source-connector/usage-examples/copy-existing-data.txt

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ This usage example demonstrates how to copy data from a MongoDB collection to an
 Example
 -------

-Suppose you need to copy a MongoDB collection to {+kafka+} and filter some of the data.
+Suppose you want to copy a MongoDB collection to {+kafka+} and filter some data.

 Your requirements and your solutions are as follows:
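
A hedged sketch of the general shape of such a configuration, assuming a connector version that supports the ``copy_existing`` startup mode; the namespace and filter shown are hypothetical and not the ones used in this usage example:

  startup.mode=copy_existing
  startup.mode.copy.existing.pipeline=[{"$match": {"status": "active"}}]
  database=sales
  collection=orders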

source/source-connector/usage-examples/custom-pipeline.txt

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ For more information, see the MongoDB Server manual entry on
 Example
 -------

-Suppose you're an event coordinator who needs to collect names and arrival times
+Suppose you are coordinating an event and want to collect names and arrival times
 of each guest at a specific event. Whenever a guest checks into the event,
 an application inserts a new document that contains the following details:

source/source-connector/usage-examples/schema.txt

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ Specify a Schema
 This usage example demonstrates how you can configure your {+source-connector+}
 to apply a custom **schema** to your data. A schema is a
 definition that specifies the structure and type information about data in an
-{+kafka+} topic. Use a schema when you need to ensure the data on the topic populated
+{+kafka+} topic. Use a schema when you must ensure the data on the topic populated
 by your source connector has a consistent structure.

 To learn more about using schemas with the connector, see the
@@ -17,7 +17,7 @@ Example
 -------

 Suppose your application keeps track of customer data in a MongoDB
-collection, and you need to publish this data to a Kafka topic. You want
+collection, and you want to publish this data to a Kafka topic. You want
 the subscribers of the customer data to receive consistently formatted data.
 You choose to apply a schema to your data.
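
A hedged sketch of the kind of source connector settings this example is about, assuming the ``output.format.value`` and ``output.schema.value`` properties; the schema shown is hypothetical and is not the customer schema used in the example:

  output.format.value=schema
  output.schema.value={"type": "record", "name": "Customer", "fields": [{"name": "name", "type": "string"}, {"name": "visits", "type": "int"}]}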

source/tutorials/migrate-time-series.txt

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ data consists of measurements taken at time intervals, metadata that describes
 the measurement, and the time of the measurement.

 To convert data from a MongoDB collection to a time series collection using
-the connector, you need to perform the following tasks:
+the connector, you must perform the following tasks:

 #. Identify the time field common to all documents in the collection.
 #. Configure a source connector to copy the existing collection data to a
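
As a hedged illustration of the sink side of this migration, assuming the sink connector's time series properties (``timeseries.timefield`` and ``timeseries.timefield.auto.convert``); the field name is hypothetical and must match the time field identified in the first task:

  timeseries.timefield=ts
  timeseries.timefield.auto.convert=true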

source/tutorials/replicate-with-cdc.txt

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ Overview
 Follow this tutorial to learn how to use a
 **change data capture (CDC) handler** to replicate data with the {+connector+}.
 A CDC handler is an application that translates CDC events into MongoDB
-write operations. Use a CDC handler when you need to reproduce the changes
+write operations. Use a CDC handler when you must reproduce the changes
 in one datastore into another datastore.

 In this tutorial, you configure and run MongoDB Kafka source and sink
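
As a hedged sketch of the kind of sink setting the tutorial builds toward, assuming the connector's built-in MongoDB change stream CDC handler class; verify the class name against the CDC handler reference before using it:

  change.data.capture.handler=com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler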

source/tutorials/tutorial-setup.txt

Lines changed: 3 additions & 3 deletions
@@ -5,7 +5,7 @@ Kafka Connector Tutorial Setup
 ==============================

 The tutorials in this section run on a development environment using Docker to
-package the dependencies and configurations you need to run the
+package the dependencies and configurations required to run the
 {+connector-long+}. Make sure you complete the development environment setup
 steps before proceeding to the tutorials.

@@ -102,8 +102,8 @@ Set Up Your Development Environment with Docker

 .. step:: Verify the Successful Setup

-Confirm the development environment started normally by running the
-following commands:
+Confirm that the development environment started successfully by
+running the following commands:

 .. code-block:: bash

source/whats-new.txt

Lines changed: 1 addition & 1 deletion
@@ -236,7 +236,7 @@ What's New in 1.6
 and :ref:`source connector <source-configuration-error-handling>`
 that can override the {+kafka-connect+} framework's error handling behavior
 - Added ``mongo-kafka-connect-<version>-confluent.jar``, which contains
-the connector and all dependencies needed to run it on the Confluent Platform
+the connector and all dependencies required to run it on the Confluent Platform

 Sink Connector
 ~~~~~~~~~~~~~~
