conf/kubernetes-resource-staging-server.yaml (2 changes: 1 addition & 1 deletion)
@@ -32,7 +32,7 @@ spec:
           name: spark-resource-staging-server-config
       containers:
       - name: spark-resource-staging-server
-        image: kubespark/spark-resource-staging-server:v2.1.0-kubernetes-0.2.0
+        image: kubespark/spark-resource-staging-server:v2.2.0-kubernetes-0.3.0
         resources:
           requests:
             cpu: 100m
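
Reviewer note: once this manifest is updated, the staging server has to be re-applied for the new image to take effect. A minimal sketch, assuming your current kubectl context points at the target cluster and namespace:

```bash
# Re-apply the manifest so the staging server picks up the
# v2.2.0-kubernetes-0.3.0 image.
kubectl apply -f conf/kubernetes-resource-staging-server.yaml

# Spot-check that running pods now report the new image tag.
kubectl get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```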
conf/kubernetes-shuffle-service.yaml (8 changes: 4 additions & 4 deletions)
@@ -20,14 +20,14 @@ kind: DaemonSet
 metadata:
   labels:
     app: spark-shuffle-service
-    spark-version: 2.1.0
+    spark-version: 2.2.0
   name: shuffle
 spec:
   template:
     metadata:
       labels:
         app: spark-shuffle-service
-        spark-version: 2.1.0
+        spark-version: 2.2.0
     spec:
       volumes:
       - name: temp-volume
@@ -38,7 +38,7 @@ spec:
         # This is an official image that is built
         # from the dockerfiles/shuffle directory
         # in the spark distribution.
-        image: kubespark/spark-shuffle:v2.1.0-kubernetes-0.2.0
+        image: kubespark/spark-shuffle:v2.2.0-kubernetes-0.3.0
         imagePullPolicy: IfNotPresent
         volumeMounts:
         - mountPath: '/tmp'
@@ -51,4 +51,4 @@ spec:
           requests:
             cpu: "1"
           limits:
-            cpu: "1"
\ No newline at end of file
+            cpu: "1"
docs/running-on-kubernetes.md (34 changes: 17 additions & 17 deletions)
@@ -36,15 +36,15 @@ If you wish to use pre-built docker images, you may use the images published in
 <tr><th>Component</th><th>Image</th></tr>
 <tr>
   <td>Spark Driver Image</td>
-  <td><code>kubespark/spark-driver:v2.1.0-kubernetes-0.2.0</code></td>
+  <td><code>kubespark/spark-driver:v2.2.0-kubernetes-0.3.0</code></td>
 </tr>
 <tr>
   <td>Spark Executor Image</td>
-  <td><code>kubespark/spark-executor:v2.1.0-kubernetes-0.2.0</code></td>
+  <td><code>kubespark/spark-executor:v2.2.0-kubernetes-0.3.0</code></td>
 </tr>
 <tr>
   <td>Spark Initialization Image</td>
-  <td><code>kubespark/spark-init:v2.1.0-kubernetes-0.2.0</code></td>
+  <td><code>kubespark/spark-init:v2.2.0-kubernetes-0.3.0</code></td>
 </tr>
 </table>
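
The new tags can be sanity-checked by pulling the published images directly. Pre-pulling is optional, since the kubelet fetches images on demand:

```bash
# Pull the v2.2.0-kubernetes-0.3.0 images listed in the table above.
docker pull kubespark/spark-driver:v2.2.0-kubernetes-0.3.0
docker pull kubespark/spark-executor:v2.2.0-kubernetes-0.3.0
docker pull kubespark/spark-init:v2.2.0-kubernetes-0.3.0
```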

@@ -80,9 +80,9 @@ are set up as described above:
       --kubernetes-namespace default \
       --conf spark.executor.instances=5 \
       --conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
       local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar

The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
@@ -129,9 +129,9 @@ and then you can compute the value of Pi as follows:
       --kubernetes-namespace default \
       --conf spark.executor.instances=5 \
       --conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
       --conf spark.kubernetes.resourceStagingServer.uri=http://<address-of-any-cluster-node>:31000 \
       examples/jars/spark_examples_2.11-2.2.0.jar
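
One way to fill in `<address-of-any-cluster-node>` above is to ask the API server for a node address; a sketch, assuming nodes report an InternalIP reachable from the submitting machine:

```bash
# Grab the first node's internal IP to use as the staging server host
# (the staging server service is exposed on NodePort 31000 above).
kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
```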

@@ -170,9 +170,9 @@ If our local proxy were listening on port 8001, we would have our submission loo
       --kubernetes-namespace default \
       --conf spark.executor.instances=5 \
       --conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
       local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar
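
The local proxy referenced in this example can be started with kubectl before submitting; a minimal sketch:

```bash
# Expose the Kubernetes API on localhost:8001, matching the port
# assumed by the submission example above.
kubectl proxy --port=8001
```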

Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.
@@ -220,7 +220,7 @@ service because there may be multiple shuffle service instances running in a clu
 a way to target a particular shuffle service.
 
 For example, if the shuffle service we want to use is in the default namespace, and
-has pods with labels `app=spark-shuffle-service` and `spark-version=2.1.0`, we can
+has pods with labels `app=spark-shuffle-service` and `spark-version=2.2.0`, we can
 use those tags to target that particular shuffle service at job launch time. In order to run a job with dynamic allocation enabled,
 the command may then look like the following:

@@ -235,7 +235,7 @@ the command may then look like the following:
       --conf spark.dynamicAllocation.enabled=true \
       --conf spark.shuffle.service.enabled=true \
       --conf spark.kubernetes.shuffle.namespace=default \
-      --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.1.0" \
+      --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
       local:///opt/spark/examples/jars/spark_examples_2.11-2.2.0.jar 10 400000 2
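
Before submitting with these labels, it is worth confirming that they actually select the running shuffle pods in the target namespace; a quick check:

```bash
# The selector must match the labels on the shuffle DaemonSet pods,
# otherwise executors cannot register with a shuffle service.
kubectl get pods -n default \
  -l "app=spark-shuffle-service,spark-version=2.2.0"
```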

## Advanced
@@ -312,9 +312,9 @@ communicate with the resource staging server over TLS. The trustStore can be set
       --kubernetes-namespace default \
       --conf spark.executor.instances=5 \
       --conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.2.0 \
-      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.2.0 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.3.0 \
+      --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.3.0 \
       --conf spark.kubernetes.resourceStagingServer.uri=https://<address-of-any-cluster-node>:31000 \
       --conf spark.ssl.kubernetes.resourceStagingServer.enabled=true \
      --conf spark.ssl.kubernetes.resourceStagingServer.clientCertPem=/home/myuser/cert.pem \