
Commit c9d64ba

YARN-11007. Correct words in YARN documents (#3680)
Reviewed-by: cxorm <[email protected]>
Signed-off-by: Akira Ajisaka <[email protected]>
1 parent 9c887e5 commit c9d64ba

4 files changed, +7 -7 lines changed

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md

Lines changed: 3 additions & 3 deletions
@@ -282,7 +282,7 @@ ApplicationReport report = yarnClient.getApplicationReport(appId);
 
 >> * *Application tracking information*: If the application supports some form of progress tracking, it can set a tracking url which is available via `ApplicationReport`'s `getTrackingUrl()` method that a client can look at to monitor progress.
 
->> * *Application status*: The state of the application as seen by the ResourceManager is available via `ApplicationReport#getYarnApplicationState`. If the `YarnApplicationState` is set to `FINISHED`, the client should refer to `ApplicationReport#getFinalApplicationStatus` to check for the actual success/failure of the application task itself. In case of failures, `ApplicationReport#getDiagnostics` may be useful to shed some more light on the the failure.
+>> * *Application status*: The state of the application as seen by the ResourceManager is available via `ApplicationReport#getYarnApplicationState`. If the `YarnApplicationState` is set to `FINISHED`, the client should refer to `ApplicationReport#getFinalApplicationStatus` to check for the actual success/failure of the application task itself. In case of failures, `ApplicationReport#getDiagnostics` may be useful to shed some more light on the failure.
 
 > * If the ApplicationMaster supports it, a client can directly query the AM itself for progress updates via the host:rpcport information obtained from the application report. It can also use the tracking url obtained from the report if available.
 
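The status check described in the changed line above can be sketched as an illustrative client-side loop; the helper name `monitorApplication` and the one-second polling interval are this example's own choices, while the `YarnClient` and `ApplicationReport` calls are the ones the document refers to:

```
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;

// Illustrative helper: poll the ResourceManager until the application reaches
// a terminal state, then check whether the application task itself succeeded.
boolean monitorApplication(YarnClient yarnClient, ApplicationId appId)
    throws Exception {
  while (true) {
    ApplicationReport report = yarnClient.getApplicationReport(appId);
    YarnApplicationState state = report.getYarnApplicationState();
    if (state == YarnApplicationState.FINISHED) {
      // FINISHED only means YARN saw the AM complete; the task's own outcome
      // is reported separately via the final application status.
      return report.getFinalApplicationStatus() == FinalApplicationStatus.SUCCEEDED;
    }
    if (state == YarnApplicationState.KILLED
        || state == YarnApplicationState.FAILED) {
      // Diagnostics often explain why the application or its AM failed.
      System.err.println("Diagnostics: " + report.getDiagnostics());
      return false;
    }
    Thread.sleep(1000);  // illustrative polling interval
  }
}
```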
@@ -416,7 +416,7 @@ private ContainerRequest setupContainerAskForRM() {
 }
 ```
 
-* After container allocation requests have been sent by the application manager, contailers will be launched asynchronously, by the event handler of the `AMRMClientAsync` client. The handler should implement `AMRMClientAsync.CallbackHandler` interface.
+* After container allocation requests have been sent by the application manager, containers will be launched asynchronously, by the event handler of the `AMRMClientAsync` client. The handler should implement `AMRMClientAsync.CallbackHandler` interface.
 
 > * When there are containers allocated, the handler sets up a thread that runs the code to launch containers. Here we use the name `LaunchContainerRunnable` to demonstrate. We will talk about the `LaunchContainerRunnable` class in the following part of this article.
 
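A skeletal sketch of the handler mentioned in the changed line above might look like the following; only the six interface methods come from `AMRMClientAsync.CallbackHandler`, while the class name `RMCallbackHandler` and the comments about launch threads are illustrative:

```
import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

// Skeleton of the asynchronous event handler registered with AMRMClientAsync.
// onContainersAllocated is where launch threads (e.g. a LaunchContainerRunnable
// per container) would be started.
class RMCallbackHandler implements AMRMClientAsync.CallbackHandler {
  @Override
  public void onContainersAllocated(List<Container> allocatedContainers) {
    for (Container container : allocatedContainers) {
      // Start a thread that sets up and launches this container (elided).
    }
  }

  @Override
  public void onContainersCompleted(List<ContainerStatus> completedContainers) {
    // Inspect exit statuses; optionally re-request containers that failed.
  }

  @Override
  public void onShutdownRequest() {
    // The RM asked this AM to shut down; stop work and unregister.
  }

  @Override
  public void onNodesUpdated(List<NodeReport> updatedNodes) {
    // Node health/availability changes; often a no-op for simple AMs.
  }

  @Override
  public float getProgress() {
    return 0.0f;  // report real application progress here
  }

  @Override
  public void onError(Throwable e) {
    // Fatal client error: stop the AMRMClientAsync and fail the application.
  }
}
```

Such a handler is then passed to `AMRMClientAsync.createAMRMClientAsync(heartbeatIntervalMs, handler)` when the asynchronous client is created.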
@@ -556,7 +556,7 @@ The `ApplicationAttemptId` will be passed to the AM via the environment and the
 
 ### Why my container is killed by the NodeManager?
 
-This is likely due to high memory usage exceeding your requested container memory size. There are a number of reasons that can cause this. First, look at the process tree that the NodeManager dumps when it kills your container. The two things you're interested in are physical memory and virtual memory. If you have exceeded physical memory limits your app is using too much physical memory. If you're running a Java app, you can use -hprof to look at what is taking up space in the heap. If you have exceeded virtual memory, you may need to increase the value of the the cluster-wide configuration variable `yarn.nodemanager.vmem-pmem-ratio`.
+This is likely due to high memory usage exceeding your requested container memory size. There are a number of reasons that can cause this. First, look at the process tree that the NodeManager dumps when it kills your container. The two things you're interested in are physical memory and virtual memory. If you have exceeded physical memory limits your app is using too much physical memory. If you're running a Java app, you can use -hprof to look at what is taking up space in the heap. If you have exceeded virtual memory, you may need to increase the value of the cluster-wide configuration variable `yarn.nodemanager.vmem-pmem-ratio`.
 
 ### How do I include native libraries?
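For the `yarn.nodemanager.vmem-pmem-ratio` answer in the hunk above, a minimal sketch (assuming the standard `YarnConfiguration` constants for that property) that prints the ratio the configuration visible to this JVM would apply:

```
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Print the virtual-to-physical memory ratio the NodeManager would enforce.
// In practice this is set in yarn-site.xml on the NodeManagers; reading it
// here only reflects the configuration visible to this JVM.
YarnConfiguration conf = new YarnConfiguration();
float vmemPmemRatio = conf.getFloat(
    YarnConfiguration.NM_VMEM_PMEM_RATIO,
    YarnConfiguration.DEFAULT_NM_VMEM_PMEM_RATIO);
System.out.println("yarn.nodemanager.vmem-pmem-ratio = " + vmemPmemRatio);
```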

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ The ApplicationsManager is responsible for accepting job-submissions, negotiatin
 
 MapReduce in hadoop-2.x maintains **API compatibility** with previous stable release (hadoop-1.x). This means that all MapReduce jobs should still run unchanged on top of YARN with just a recompile.
 
-YARN supports the notion of **resource reservation** via the [ReservationSystem](./ReservationSystem.html), a component that allows users to specify a profile of resources over-time and temporal constraints (e.g., deadlines), and reserve resources to ensure the predictable execution of important jobs.The *ReservationSystem* tracks resources over-time, performs admission control for reservations, and dynamically instruct the underlying scheduler to ensure that the reservation is fullfilled.
+YARN supports the notion of **resource reservation** via the [ReservationSystem](./ReservationSystem.html), a component that allows users to specify a profile of resources over-time and temporal constraints (e.g., deadlines), and reserve resources to ensure the predictable execution of important jobs.The *ReservationSystem* tracks resources over-time, performs admission control for reservations, and dynamically instruct the underlying scheduler to ensure that the reservation is fulfilled.
 
 In order to scale YARN beyond few thousands nodes, YARN supports the notion of **Federation** via the [YARN Federation](./Federation.html) feature. Federation allows to transparently wire together multiple yarn (sub-)clusters, and
 make them appear as a single massive cluster. This can be used to achieve larger scale, and/or to allow multiple independent clusters to be used together for very large jobs, or for tenants who have capacity across all of them.

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnApplicationSecurity.md

Lines changed: 2 additions & 2 deletions
@@ -175,7 +175,7 @@ More precisely
 1. The token passed by the RM to the NM for localization is refreshed/updated as needed.
 1. Tokens in the app launch context for use by the application are *not* refreshed.
 That is, if it has an out of date HDFS token —that token is not renewed. This
-also holds for tokens for for Hive, HBase, etc.
+also holds for tokens for Hive, HBase, etc.
 1. Therefore, to survive AM restart after token expiry, your AM has to get the
 NMs to localize the keytab or make no HDFS accesses until (somehow) a new token has been passed to them from a client.
 

@@ -546,7 +546,7 @@ the list of resources to localize.
 is readable by principals other than the current user, warn,
 and consider actually failing the launch (similar to the normal `ssh` application.)
 
-`[ ]` Client acquires HDFS delegation token and and attaches to the AM Container
+`[ ]` Client acquires HDFS delegation token and attaches to the AM Container
 Launch Context,
 
 `[ ]` AM logs in as principal in keytab via `loginUserFromKeytab()`.
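A hedged sketch of the "client acquires HDFS delegation token" checklist step above, following the pattern commonly used by YARN clients; the choice of renewer and the `amContainer` launch context built here are assumptions of this example:

```
import java.nio.ByteBuffer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

// Acquire an HDFS delegation token (with the RM principal as renewer) and
// attach it to the AM's ContainerLaunchContext so the AM can reach HDFS.
Configuration conf = new YarnConfiguration();
FileSystem fs = FileSystem.get(conf);
Credentials credentials = new Credentials();

// The renewer is typically the RM's principal (yarn.resourcemanager.principal).
String renewer = conf.get(YarnConfiguration.RM_PRINCIPAL);
fs.addDelegationTokens(renewer, credentials);

// Serialize the credentials and hand them to the launch context being built
// for the AM container (amContainer is this example's own name for it).
DataOutputBuffer dob = new DataOutputBuffer();
credentials.writeTokenStorageToStream(dob);
ByteBuffer fsTokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength());

ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
amContainer.setTokens(fsTokens);
```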

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md

Lines changed: 1 addition & 1 deletion
@@ -254,7 +254,7 @@ Usage:
 | -directlyAccessNodeLabelStore | This is DEPRECATED, will be removed in future releases. Directly access node label store, with this option, all node label related operations will not connect RM. Instead, they will access/modify stored node labels directly. By default, it is false (access via RM). AND PLEASE NOTE: if you configured yarn.node-labels.fs-store.root-dir to a local directory (instead of NFS or HDFS), this option will only work when the command run on the machine where RM is running. |
 | -refreshClusterMaxPriority | Refresh cluster max priority |
 | -updateNodeResource [NodeID] [MemSize] [vCores] \([OvercommitTimeout]\) | Update resource on specific node. |
-| -updateNodeResource [NodeID] [ResourceTypes] \([OvercommitTimeout]\) | Update resource types on specific node. Resource Types is comma-delimited key value pairs of any resources availale at Resource Manager. For example, memory-mb=1024Mi,vcores=1,resource1=2G,resource2=4m|
+| -updateNodeResource [NodeID] [ResourceTypes] \([OvercommitTimeout]\) | Update resource types on specific node. Resource Types is comma-delimited key value pairs of any resources available at Resource Manager. For example, memory-mb=1024Mi,vcores=1,resource1=2G,resource2=4m|
 | -transitionToActive [--forceactive] [--forcemanual] \<serviceId\> | Transitions the service into Active state. Try to make the target active without checking that there is no active node if the --forceactive option is used. This command can not be used if automatic failover is enabled. Though you can override this by --forcemanual option, you need caution. This command can not be used if automatic failover is enabled.|
 | -transitionToStandby [--forcemanual] \<serviceId\> | Transitions the service into Standby state. This command can not be used if automatic failover is enabled. Though you can override this by --forcemanual option, you need caution. |
 | -getServiceState \<serviceId\> | Returns the state of the service. |

0 commit comments

Comments
 (0)