
Commit d57d77d

Add documentation
1 parent 825afa0 commit d57d77d


docs/running-on-mesos.md

Lines changed: 20 additions & 3 deletions
@@ -78,6 +78,9 @@ To verify that the Mesos cluster is ready for Spark, navigate to the Mesos maste
 To use Mesos from Spark, you need a Spark binary package available in a place accessible by Mesos, and
 a Spark driver program configured to connect to Mesos.
 
+Alternatively, you can also install Spark in the same location in all the Mesos slaves, and configure
+`spark.mesos.executor.home` (defaults to SPARK_HOME) to point to that location.
+
 ## Uploading Spark Package
 
 When Mesos runs a task on a Mesos slave for the first time, that slave must have a Spark binary
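
As a sketch of the alternative introduced above, suppose Spark is pre-installed at `/opt/spark` on every slave; that path is purely illustrative, not something this commit specifies:

{% highlight bash %}
# Assumed layout: Spark already installed at /opt/spark on every Mesos slave.
# Point executors at that install instead of fetching a binary package:
echo "spark.mesos.executor.home /opt/spark" >> conf/spark-defaults.conf

# Or override per application at launch time:
./bin/spark-shell \
  --master mesos://host:5050 \
  --conf spark.mesos.executor.home=/opt/spark
{% endhighlight %}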
@@ -107,7 +110,11 @@ the `make-distribution.sh` script included in a Spark source tarball/checkout.
 The Master URLs for Mesos are in the form `mesos://host:5050` for a single-master Mesos
 cluster, or `mesos://zk://host:2181` for a multi-master Mesos cluster using ZooKeeper.
 
-The driver also needs some configuration in `spark-env.sh` to interact properly with Mesos:
+## Client Mode
+
+In client mode, a Spark Mesos framework is launched directly on the client machine and waits for the driver output.
+
+The driver needs some configuration in `spark-env.sh` to interact properly with Mesos:
 
 1. In `spark-env.sh` set some environment variables:
 * `export MESOS_NATIVE_JAVA_LIBRARY=<path to libmesos.so>`. This path is typically
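
For reference, the client-mode settings this hunk begins to list usually land in `conf/spark-env.sh`; the concrete paths below are placeholders, not values from the commit:

{% highlight bash %}
# conf/spark-env.sh -- illustrative values only.
# Native Mesos library; the exact location varies by install (placeholder path):
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
# Spark binary package reachable from every slave (hypothetical URI):
export SPARK_EXECUTOR_URI=hdfs://namenode/dist/spark-x.y.z.tgz
{% endhighlight %}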
@@ -129,8 +136,7 @@ val sc = new SparkContext(conf)
 {% endhighlight %}
 
 (You can also use [`spark-submit`](submitting-applications.html) and configure `spark.executor.uri`
-in the [conf/spark-defaults.conf](configuration.html#loading-default-configurations) file. Note
-that `spark-submit` currently only supports deploying the Spark driver in `client` mode for Mesos.)
+in the [conf/spark-defaults.conf](configuration.html#loading-default-configurations) file.)
 
 When running a shell, the `spark.executor.uri` parameter is inherited from `SPARK_EXECUTOR_URI`, so
 it does not need to be redundantly passed in as a system property.
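
The retained sentence about `spark-submit` and `spark.executor.uri` corresponds to a setup along the following lines; the URI, class, and jar path are assumptions for illustration:

{% highlight bash %}
# Hypothetical entry in conf/spark-defaults.conf:
#   spark.executor.uri  hdfs://namenode/dist/spark-x.y.z.tgz
# With that default in place, a client-mode submission needs only the master URL:
./bin/spark-submit \
  --master mesos://host:5050 \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples.jar
{% endhighlight %}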
@@ -139,6 +145,17 @@ it does not need to be redundantly passed in as a system property.
 ./bin/spark-shell --master mesos://host:5050
 {% endhighlight %}
 
+## Cluster Mode
+
+Spark on Mesos also supports cluster mode, where the driver is launched in the cluster and the client
+can find the results of the driver from the Mesos Web UI.
+
+To use cluster mode, you must start the MesosClusterDispatcher in your cluster via the `sbin/start-mesos-dispatcher.sh` script,
+passing in the Mesos master URL (e.g. mesos://host:5050).
+
+From the client, you can submit a job to the Mesos cluster by running `spark-submit` and specifying the master URL
+as the URL of the MesosClusterDispatcher (e.g. mesos://dispatcher:7077). You can view driver statuses on the
+Spark cluster Web UI.
+
 # Mesos Run Modes
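
Read together, the new cluster-mode paragraphs describe a two-step flow. The sketch below is one plausible rendering; the dispatcher host name and the application jar location are stand-ins:

{% highlight bash %}
# Step 1, on a machine inside the cluster: start the dispatcher, passing the
# Mesos master URL (the --master flag form is an assumption here).
./sbin/start-mesos-dispatcher.sh --master mesos://host:5050

# Step 2, from the client: submit in cluster mode, targeting the dispatcher.
# The jar must be at a URL reachable from within the cluster (hypothetical).
./bin/spark-submit \
  --deploy-mode cluster \
  --master mesos://dispatcher:7077 \
  --class org.apache.spark.examples.SparkPi \
  http://repo.example.com/spark-examples.jar
{% endhighlight %}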