
Commit 474fa80

HADOOP-17277. Correct spelling errors for separator (#2322)
Contributed by Hui Fei.
1 parent dfc2682

9 files changed: +17 −17 lines


hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@ Many subcommands honor a common set of configuration options to alter their beha
 | `-files <comma separated list of files> ` | Specify comma separated files to be copied to the map reduce cluster. Applies only to job. |
 | `-fs <file:///> or <hdfs://namenode:port>` | Specify default filesystem URL to use. Overrides 'fs.defaultFS' property from configurations. |
 | `-jt <local> or <resourcemanager:port>` | Specify a ResourceManager. Applies only to job. |
-| `-libjars <comma seperated list of jars> ` | Specify comma separated jar files to include in the classpath. Applies only to job. |
+| `-libjars <comma separated list of jars> ` | Specify comma separated jar files to include in the classpath. Applies only to job. |
 
 Hadoop Common Commands
 ======================

hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_mini_stress.c
Lines changed: 1 addition & 1 deletion

@@ -279,7 +279,7 @@ static int testHdfsMiniStressImpl(struct tlhThreadInfo *ti)
   EXPECT_NONNULL(ti->hdfs);
   // Error injection on, some failures are expected in the read path.
   // The expectation is that any memory stomps will cascade and cause
-  // the following test to fail. Ideally RPC errors would be seperated
+  // the following test to fail. Ideally RPC errors would be separated
   // from BlockReader errors (RPC is expected to recover from disconnects).
   doTestHdfsMiniStress(ti, 1);
   // No error injection

hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/uri.h
Lines changed: 1 addition & 1 deletion

@@ -103,7 +103,7 @@ class URI {
 
   std::string str(bool encoded_output=true) const;
 
-  // Get a string with each URI field printed on a seperate line
+  // Get a string with each URI field printed on a separate line
   std::string GetDebugString() const;
 private:
   // These are stored in encoded form

hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/uriparser2/uriparser2/uriparser/UriFile.c
Lines changed: 1 addition & 1 deletion

@@ -90,7 +90,7 @@ static URI_INLINE int URI_FUNC(FilenameToUriString)(const URI_CHAR * filename,
   if ((input[0] == _UT('\0'))
       || (fromUnix && input[0] == _UT('/'))
       || (!fromUnix && input[0] == _UT('\\'))) {
-    /* Copy text after last seperator */
+    /* Copy text after last separator */
     if (lastSep + 1 < input) {
       if (!fromUnix && absolute && (firstSegment == URI_TRUE)) {
         /* Quick hack to not convert "C:" to "C%3A" */

hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/utils/ConsistentHashRing.java
Lines changed: 3 additions & 3 deletions

@@ -33,8 +33,8 @@
  * or remove nodes, it minimizes the item migration.
  */
 public class ConsistentHashRing {
-  private static final String SEPERATOR = "/";
-  private static final String VIRTUAL_NODE_FORMAT = "%s" + SEPERATOR + "%d";
+  private static final String SEPARATOR = "/";
+  private static final String VIRTUAL_NODE_FORMAT = "%s" + SEPARATOR + "%d";
 
   /** Hash ring. */
   private SortedMap<String, String> ring = new TreeMap<String, String>();
@@ -119,7 +119,7 @@ public String getLocation(String item) {
       hash = tailMap.isEmpty() ? ring.firstKey() : tailMap.firstKey();
     }
     String virtualNode = ring.get(hash);
-    int index = virtualNode.lastIndexOf(SEPERATOR);
+    int index = virtualNode.lastIndexOf(SEPARATOR);
     if (index >= 0) {
       return virtualNode.substring(0, index);
     } else {

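The ConsistentHashRing hunk above shows the lookup pattern this class relies on: virtual node names are built as `"<node>" + SEPARATOR + "<index>"`, stored in a sorted map keyed by hash, and `getLocation` finds the first virtual node clockwise from an item's hash (wrapping to `firstKey()`), then strips the trailing `SEPARATOR + index` to recover the physical node. A minimal self-contained sketch of that idea follows; the class name, the stand-in string hash, and the node names are illustrative assumptions, not Hadoop's actual implementation (which only the `SEPARATOR` constant and the `tailMap`/`lastIndexOf` lookup are taken from).

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of a consistent hash ring with virtual nodes, loosely modeled on
// the ConsistentHashRing hunk above. The hash function here is a stand-in
// (a fixed-width hex rendering of String.hashCode()), chosen only so that
// lexicographic order on keys matches unsigned numeric order of the hashes.
public class HashRingSketch {
  private static final String SEPARATOR = "/";

  private final SortedMap<String, String> ring = new TreeMap<>();
  private final int virtualNodes;

  public HashRingSketch(int virtualNodes) {
    this.virtualNodes = virtualNodes;
  }

  // Stand-in hash; any stable string hash with a total order works for the sketch.
  private static String hash(String key) {
    return String.format("%08x", key.hashCode());
  }

  // Add one physical node as `virtualNodes` points on the ring.
  public void addNode(String node) {
    for (int i = 0; i < virtualNodes; i++) {
      String virtualNode = node + SEPARATOR + i;
      ring.put(hash(virtualNode), virtualNode);
    }
  }

  // First virtual node clockwise from the item's hash; wrap to the start.
  public String getLocation(String item) {
    if (ring.isEmpty()) {
      return null;
    }
    String h = hash(item);
    SortedMap<String, String> tailMap = ring.tailMap(h);
    String key = tailMap.isEmpty() ? ring.firstKey() : tailMap.firstKey();
    String virtualNode = ring.get(key);
    // Strip the trailing "/<index>" to recover the physical node name.
    int index = virtualNode.lastIndexOf(SEPARATOR);
    return index >= 0 ? virtualNode.substring(0, index) : virtualNode;
  }

  public static void main(String[] args) {
    HashRingSketch ring = new HashRingSketch(100);
    ring.addNode("nodeA");
    ring.addNode("nodeB");
    System.out.println("item1 -> " + ring.getLocation("item1"));
  }
}
```

Because each item hashes independently of the node set, adding or removing one node only remaps the items whose nearest clockwise virtual node changed, which is the migration-minimizing property the class comment describes.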
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
Lines changed: 1 addition & 1 deletion

@@ -3142,7 +3142,7 @@ See also: [`CREATESNAPSHOT`](#Create_Snapshot), [`DELETESNAPSHOT`](#Delete_Snaps
 | Description | A list of source paths. |
 | Type | String |
 | Default Value | \<empty\> |
-| Valid Values | A list of comma seperated absolute FileSystem paths without scheme and authority. |
+| Valid Values | A list of comma separated absolute FileSystem paths without scheme and authority. |
 | Syntax | Any string. |
 
 See also: [`CONCAT`](#Concat_Files)

hadoop-yarn-project/hadoop-yarn/conf/container-executor.cfg
Lines changed: 7 additions & 7 deletions

@@ -8,15 +8,15 @@ feature.tc.enabled=false
 #[docker]
 # module.enabled=## enable/disable the module. set to "true" to enable, disabled by default
 # docker.binary=/usr/bin/docker
-# docker.allowed.capabilities=## comma seperated capabilities that can be granted, e.g CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE
-# docker.allowed.devices=## comma seperated list of devices that can be mounted into a container
-# docker.allowed.networks=## comma seperated networks that can be used. e.g bridge,host,none
-# docker.allowed.ro-mounts=## comma seperated volumes that can be mounted as read-only
-# docker.allowed.rw-mounts=## comma seperate volumes that can be mounted as read-write, add the yarn local and log dirs to this list to run Hadoop jobs
+# docker.allowed.capabilities=## comma separated capabilities that can be granted, e.g CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE
+# docker.allowed.devices=## comma separated list of devices that can be mounted into a container
+# docker.allowed.networks=## comma separated networks that can be used. e.g bridge,host,none
+# docker.allowed.ro-mounts=## comma separated volumes that can be mounted as read-only
+# docker.allowed.rw-mounts=## comma separate volumes that can be mounted as read-write, add the yarn local and log dirs to this list to run Hadoop jobs
 # docker.privileged-containers.enabled=false
-# docker.allowed.volume-drivers=## comma seperated list of allowed volume-drivers
+# docker.allowed.volume-drivers=## comma separated list of allowed volume-drivers
 # docker.no-new-privileges.enabled=## enable/disable the no-new-privileges flag for docker run. Set to "true" to enable, disabled by default
-# docker.allowed.runtimes=## comma seperated runtimes that can be used.
+# docker.allowed.runtimes=## comma separated runtimes that can be used.
 
 # The configs below deal with settings for FPGA resource
 #[fpga]

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainers.md
Lines changed: 1 addition & 1 deletion

@@ -284,7 +284,7 @@ are allowed. It contains the following properties:
 | `docker.trusted.registries` | Comma separated list of trusted docker registries for running trusted privileged docker containers. By default, no registries are defined. |
 | `docker.inspect.max.retries` | Integer value to check docker container readiness. Each inspection is set with 3 seconds delay. Default value of 10 will wait 30 seconds for docker container to become ready before marked as container failed. |
 | `docker.no-new-privileges.enabled` | Enable/disable the no-new-privileges flag for docker run. Set to "true" to enable, disabled by default. |
-| `docker.allowed.runtimes` | Comma seperated runtimes that containers are allowed to use. By default no runtimes are allowed to be added.|
+| `docker.allowed.runtimes` | Comma separated runtimes that containers are allowed to use. By default no runtimes are allowed to be added.|
 | `docker.service-mode.enabled` | Set to "true" or "false" to enable or disable docker container service mode. Default value is "false". |
 
 Please note that if you wish to run Docker containers that require access to the YARN local directories, you must add them to the docker.allowed.rw-mounts list.

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md
Lines changed: 1 addition & 1 deletion

@@ -256,7 +256,7 @@ Optional:
 |`yarn.router.submit.retry` | `3` | The number of retries in the router before we give up. |
 |`yarn.federation.statestore.max-connections` | `10` | This is the maximum number of parallel connections each Router makes to the state-store. |
 |`yarn.federation.cache-ttl.secs` | `60` | The Router caches informations, and this is the time to leave before the cache is invalidated. |
-|`yarn.router.webapp.interceptor-class.pipeline` | `org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST` | A comma-seperated list of interceptor classes to be run at the router when interfacing with the client via REST interface. The last step of this pipeline must be the Federation Interceptor REST. |
+|`yarn.router.webapp.interceptor-class.pipeline` | `org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST` | A comma-separated list of interceptor classes to be run at the router when interfacing with the client via REST interface. The last step of this pipeline must be the Federation Interceptor REST. |
 
 ###ON NMs:
