
Commit a6a00c2

[SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688
- Removed 'Known issues' section
1 parent 0dace7b

File tree: 1 file changed (0 additions, 3 deletions)


docs/running-on-mesos.md

Lines changed: 0 additions & 3 deletions
@@ -167,9 +167,6 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offered
 only makes sense if you run just one application at a time. You can cap the maximum number of cores
 using `conf.set("spark.cores.max", "10")` (for example).
 
-# Known issues
-- When using the "fine-grained" mode, make sure that your executors always leave 32 MB free on the slaves. Otherwise it can happen that your Spark job does not proceed anymore. Currently, Apache Mesos only offers resources if there are at least 32 MB memory allocatable. But as Spark allocates memory only for the executor and cpu only for tasks, it can happen on high slave memory usage that no new tasks will be started anymore. More details can be found in [MESOS-1688](https://issues.apache.org/jira/browse/MESOS-1688). Alternatively use the "coarse-gained" mode, which is not affected by this issue.
-
 # Running Alongside Hadoop
 
 You can run Spark and Mesos alongside your existing Hadoop cluster by just launching them as a
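
For context, both the `spark.cores.max` cap mentioned in the surviving docs text and the coarse-grained mode that the removed "Known issues" section recommended as a workaround are ordinary `SparkConf` settings. Below is a minimal sketch, not part of this commit; the Mesos master URL and application name are placeholders.

```scala
// Minimal sketch (not from this commit): cap total cores and enable
// coarse-grained Mesos mode, the workaround the removed section pointed to.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://host:5050")      // placeholder Mesos master URL
  .setAppName("example-app")           // placeholder application name
  .set("spark.cores.max", "10")        // cap the cores this application acquires
  .set("spark.mesos.coarse", "true")   // coarse-grained mode, not affected by MESOS-1688

val sc = new SparkContext(conf)
```

In coarse-grained mode Spark holds long-running executors for the lifetime of the application instead of launching one Mesos task per Spark task, so it does not depend on fresh resource offers and avoids the 32 MB offer threshold described in MESOS-1688.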
