@@ -78,6 +78,9 @@ To verify that the Mesos cluster is ready for Spark, navigate to the Mesos maste
To use Mesos from Spark, you need a Spark binary package available in a place accessible by Mesos, and
a Spark driver program configured to connect to Mesos.

+Alternatively, you can install Spark in the same location on all the Mesos slaves, and configure
+`spark.mesos.executor.home` (defaults to `SPARK_HOME`) to point to that location.
+
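For example, assuming Spark was installed under `/opt/spark` on every slave (the path is only a placeholder), `conf/spark-defaults.conf` would contain:

```
spark.mesos.executor.home /opt/spark
```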

## Uploading Spark Package

When Mesos runs a task on a Mesos slave for the first time, that slave must have a Spark binary
@@ -107,7 +110,11 @@ the `make-distribution.sh` script included in a Spark source tarball/checkout.
The Master URLs for Mesos are in the form `mesos://host:5050` for a single-master Mesos
cluster, or `mesos://zk://host:2181` for a multi-master Mesos cluster using ZooKeeper.

-The driver also needs some configuration in `spark-env.sh` to interact properly with Mesos:
+## Client Mode
+
+In client mode, a Spark Mesos framework is launched directly on the client machine and waits for the driver output.
+
+The driver needs some configuration in `spark-env.sh` to interact properly with Mesos:

1. In `spark-env.sh` set some environment variables:
 * `export MESOS_NATIVE_JAVA_LIBRARY=<path to libmesos.so>`. This path is typically
@@ -129,8 +136,7 @@ val sc = new SparkContext(conf)
{% endhighlight %}

(You can also use [`spark-submit`](submitting-applications.html) and configure `spark.executor.uri`
-in the [conf/spark-defaults.conf](configuration.html#loading-default-configurations) file. Note
-that `spark-submit` currently only supports deploying the Spark driver in `client` mode for Mesos.)
+in the [conf/spark-defaults.conf](configuration.html#loading-default-configurations) file.)

When running a shell, the `spark.executor.uri` parameter is inherited from `SPARK_EXECUTOR_URI`, so
it does not need to be redundantly passed in as a system property.
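As an illustration (the HDFS path below is a placeholder), `spark.executor.uri` can be set once in `conf/spark-defaults.conf` instead of in code:

```
spark.executor.uri hdfs://<namenode>/path/to/spark-package.tgz
```

`spark-submit` and `spark-shell` will then pick it up without it being passed on every invocation.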
@@ -139,6 +145,17 @@ it does not need to be redundantly passed in as a system property.
./bin/spark-shell --master mesos://host:5050
{% endhighlight %}

+## Cluster Mode
+
+Spark on Mesos also supports cluster mode, where the driver is launched in the cluster and the client
+can find the results of the driver in the Mesos Web UI.
+
+To use cluster mode, you must start the MesosClusterDispatcher in your cluster via the `sbin/start-mesos-dispatcher.sh` script,
+passing in the Mesos master URL (e.g. mesos://host:5050).
+
+From the client, you can submit a job to the Mesos cluster by running `spark-submit`, setting the master URL
+to the URL of the MesosClusterDispatcher (e.g. mesos://dispatcher:7077). You can view driver statuses on the
+Spark cluster Web UI.
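A sketch of the cluster-mode workflow described above, where the host names, application class, and jar path are all placeholders:

```
# On a machine in the cluster: start the dispatcher against the Mesos master
./sbin/start-mesos-dispatcher.sh --master mesos://host:5050

# From the client: submit to the dispatcher in cluster deploy mode
./bin/spark-submit \
  --master mesos://dispatcher:7077 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  hdfs://<namenode>/path/to/my-app.jar
```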
# Mesos Run Modes