
Commit eff7394

Improve the description about Cluster Launch Script in docs/spark-standalone.md

1 parent 7858225 · commit eff7394


docs/spark-standalone.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -62,12 +62,12 @@ Finally, the following configuration options can be passed to the master and worker:
 
 # Cluster Launch Scripts
 
-To launch a Spark standalone cluster with the launch scripts, you need to create a file called `conf/slaves` in your Spark directory,
-which should contain the hostnames of all the machines where you would like to start Spark workers, one per line. If `conf/slaves`
-does not exist, the launch scripts use a list which contains single hostname `localhost`. This can be used for testing.
-The master machine must be able to access each of the slave machines via `ssh`. By default, `ssh` is executed in the background for parallel execution for each slave machine.
-If you would like to use password authentication instead of password-less(using a private key) for `ssh`, `ssh` does not work well in the background.
-To avoid this, you can set a environment variable `SPARK_SSH_FOREGROUND` to something like `yes` or `y` to execute `ssh` in the foreground.
+To launch a Spark standalone cluster with the launch scripts, you should create a file called conf/slaves in your Spark directory,
+which must contain the hostnames of all the machines where you intend to start Spark workers, one per line.
+If conf/slaves does not exist, the launch scripts defaults to a single machine (localhost), which is useful for testing.
+Note, the master machine accesses each of the worker machines via ssh. By default, ssh is run in parallel and requires password-less (using a private key) access to be setup.
+If you do not have a password-less setup, you can set the environment variable SPARK_SSH_FOREGROUND and serially provide a password for each worker.
+
 
 Once you've set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop's deploy scripts, and available in `SPARK_HOME/bin`:
 
```
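The setup the new wording describes can be sketched as a short shell session. The worker hostnames below are placeholders, and the launch-script path is assumed from the doc's `SPARK_HOME/bin` reference; adjust both for your installation:

```shell
# Sketch of the conf/slaves setup described above; the hostnames
# are placeholders, not real machines.
mkdir -p conf
cat > conf/slaves <<'EOF'
worker1.example.com
worker2.example.com
EOF

# With password-less (key-based) ssh configured, the launch scripts
# start every worker in parallel over ssh:
#   ./bin/start-all.sh

# Without key-based ssh, force ssh into the foreground so each
# worker's password prompt can be answered one at a time:
#   SPARK_SSH_FOREGROUND=yes ./bin/start-all.sh
```

The launch commands are left commented out since they require a Spark installation and reachable workers; the point is the one-hostname-per-line file format and the `SPARK_SSH_FOREGROUND` toggle.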
