
Commit cb3be88

Various doc fixes (broken link, format etc.)

1 parent e837cde

File tree: 2 files changed, +10 -10 lines changed


docs/security.md

Lines changed: 6 additions & 6 deletions
@@ -39,22 +39,22 @@ configure those ports.
     <td>Standalone Master</td>
     <td>8080</td>
     <td>Web UI</td>
-    <td><code>spark.master.ui.port<br>SPARK_MASTER_WEBUI_PORT</code></td>
+    <td><code>spark.master.ui.port /<br> SPARK_MASTER_WEBUI_PORT</code></td>
     <td>Jetty-based. Standalone mode only.</td>
   </tr>
   <tr>
     <td>Browser</td>
     <td>Standalone Worker</td>
     <td>8081</td>
     <td>Web UI</td>
-    <td><code>spark.worker.ui.port<br>SPARK_WORKER_WEBUI_PORT</code></td>
+    <td><code>spark.worker.ui.port /<br> SPARK_WORKER_WEBUI_PORT</code></td>
     <td>Jetty-based. Standalone mode only.</td>
   </tr>
   <tr>
-    <td>Driver<br>Standalone Worker</td>
+    <td>Driver /<br> Standalone Worker</td>
     <td>Standalone Master</td>
     <td>7077</td>
-    <td>Submit job to cluster<br>Join cluster</td>
+    <td>Submit job to cluster /<br> Join cluster</td>
     <td><code>SPARK_MASTER_PORT</code></td>
     <td>Akka-based. Set to "0" to choose a port randomly. Standalone mode only.</td>
   </tr>
@@ -92,10 +92,10 @@ configure those ports.
     <td>Jetty-based</td>
   </tr>
   <tr>
-    <td>Executor<br>Standalone Master</td>
+    <td>Executor /<br> Standalone Master</td>
     <td>Driver</td>
     <td>(random)</td>
-    <td>Connect to application<br>Notify executor state changes</td>
+    <td>Connect to application /<br> Notify executor state changes</td>
     <td><code>spark.driver.port</code></td>
     <td>Akka-based. Set to "0" to choose a port randomly.</td>
   </tr>
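
A side note on the ports this hunk touches: `spark.driver.port` and the master's `SPARK_MASTER_PORT` (default 7077) are the settings an application most often pins when running behind a strict firewall. A minimal sketch of doing so through `SparkConf` (the hostname and port value below are placeholders, not values from this commit):

    import org.apache.spark.{SparkConf, SparkContext}

    object FixedPortApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("fixed-port-app")
          // 7077 is the default SPARK_MASTER_PORT from the table above;
          // "master-host" is a placeholder hostname.
          .setMaster("spark://master-host:7077")
          // Pin the driver's port instead of letting it be chosen at random,
          // so a firewall rule can reference a known value (placeholder port).
          .set("spark.driver.port", "51000")

        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 100).count())
        sc.stop()
      }
    }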

docs/spark-standalone.md

Lines changed: 4 additions & 4 deletions
@@ -300,14 +300,14 @@ You can run Spark alongside your existing Hadoop cluster by just launching it as
 # Configuring Ports for Network Security
 
 Spark makes heavy use of the network, and some environments have strict requirements for using
-tight firewall settings. For a complete list of ports to configure, see the [security page]
-(security.html#configuring-ports-for-network-security).
+tight firewall settings. For a complete list of ports to configure, see the
+[security page](security.html#configuring-ports-for-network-security).
 
 # High Availability
 
 By default, standalone scheduling clusters are resilient to Worker failures (insofar as Spark itself is resilient to losing work by moving it to other workers). However, the scheduler uses a Master to make scheduling decisions, and this (by default) creates a single point of failure: if the Master crashes, no new applications can be created. In order to circumvent this, we have two high availability schemes, detailed below.
 
-## Standby Masters with ZooKeeper
+# Standby Masters with ZooKeeper
 
 **Overview**
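
On the "Standby Masters with ZooKeeper" heading this hunk retitles: under that scheme an application lists every Master in its master URL and registers with whichever one ZooKeeper has elected leader; the recovery mode itself is enabled on the Masters (via `spark.deploy.recoveryMode=ZOOKEEPER` and `spark.deploy.zookeeper.url` in `SPARK_DAEMON_JAVA_OPTS`), not in application code. A rough sketch with placeholder hostnames:

    import org.apache.spark.{SparkConf, SparkContext}

    object HaAwareApp {
      def main(args: Array[String]): Unit = {
        // List both Masters; the driver registers with the current leader and
        // fails over if a standby Master takes over. Hostnames are placeholders.
        val conf = new SparkConf()
          .setAppName("ha-aware-app")
          .setMaster("spark://zk-master1:7077,zk-master2:7077")

        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 100).count())
        sc.stop()
      }
    }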

@@ -347,7 +347,7 @@ There's an important distinction to be made between "registering with a Master"
 
 Due to this property, new Masters can be created at any time, and the only thing you need to worry about is that _new_ applications and Workers can find it to register with in case it becomes the leader. Once registered, you're taken care of.
 
-## Single-Node Recovery with Local File System
+# Single-Node Recovery with Local File System
 
 **Overview**
 