
Commit 17eda4c

Merge pull request apache#175 from falaki/docfix

Minor documentation cleanup

2 parents 0981dff + ba2b72b

1 file changed: +5, −10 lines

README.md

Lines changed: 5 additions & 10 deletions
@@ -13,12 +13,6 @@ SparkR requires Scala 2.10 and Spark version >= 0.9.0. Current build by default
 Apache Spark 1.1.0. You can also build SparkR against a
 different Spark version (>= 0.9.0) by modifying `pkg/src/build.sbt`.
 
-SparkR also requires the R package `rJava` to be installed. To install `rJava`,
-you can run the following command in R:
-
-    install.packages("rJava")
-
-
 ### Package installation
 To develop SparkR, you can build the scala package and the R package using
 
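With the rJava prerequisite dropped above, the default developer setup reduces to the build script alone. A minimal sketch, assuming `install-dev.sh` at the repository root as the rest of this README uses it:

```sh
# Clone and build SparkR against the default Apache Spark 1.1.0.
# No install.packages("rJava") step is needed after this commit.
cd SparkR-pkg/
./install-dev.sh
```
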
@@ -31,9 +25,9 @@ If you wish to try out the package directly from github, you can use [`install_g
 
 SparkR by default uses Apache Spark 1.1.0. You can switch to a different Spark
 version by setting the environment variable `SPARK_VERSION`. For example, to
-use Apache Spark 1.2.0, you can run
+use Apache Spark 1.3.0, you can run
 
-    SPARK_VERSION=1.2.0 ./install-dev.sh
+    SPARK_VERSION=1.3.0 ./install-dev.sh
 
 SparkR by default links to Hadoop 1.0.4. To use SparkR with other Hadoop
 versions, you will need to rebuild SparkR with the same version that [Spark is
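
For context, the updated example amounts to the following, assuming `install-dev.sh` reads `SPARK_VERSION` from the environment as the README describes (the `SPARK_HADOOP_VERSION` variable is borrowed from the YARN section further down):

```sh
# Build against the bumped example version, Apache Spark 1.3.0
SPARK_VERSION=1.3.0 ./install-dev.sh

# Hypothetical variant pinning a matching Hadoop build as well
SPARK_VERSION=1.3.0 SPARK_HADOOP_VERSION=2.4.0 ./install-dev.sh
```
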
@@ -97,8 +91,9 @@ To run one of them, use `./sparkR <filename> <args>`. For example:
 
     ./sparkR examples/pi.R local[2]
 
-You can also run the unit-tests for SparkR by running
+You can also run the unit-tests for SparkR by running (you need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first):
 
+    R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
     ./run-tests.sh
 
 ## Running on EC2
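
Taken together, the added lines give a two-step test workflow; the CRAN install only needs to happen once per machine:

```sh
# One-time: install testthat non-interactively (mirror from the added line above)
R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'

# Then run the SparkR unit tests
./run-tests.sh
```
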
@@ -110,7 +105,7 @@ Instructions for running SparkR on EC2 can be found in the
 Currently, SparkR supports running on YARN with the `yarn-client` mode. These steps show how to build SparkR with YARN support and run SparkR programs on a YARN cluster:
 
 ```
-# assumes Java, R, rJava, yarn, spark etc. are installed on the whole cluster.
+# assumes Java, R, yarn, spark etc. are installed on the whole cluster.
 cd SparkR-pkg/
 USE_YARN=1 SPARK_YARN_VERSION=2.4.0 SPARK_HADOOP_VERSION=2.4.0 ./install-dev.sh
 ```
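
After a YARN-enabled build, a program would presumably be launched with the same `./sparkR <filename> <args>` pattern shown earlier, passing a YARN master instead of `local[2]`; the invocation below is an illustrative guess based on the `yarn-client` mode named above, not a line from this diff:

```sh
# Hypothetical: run the bundled pi example in yarn-client mode
./sparkR examples/pi.R yarn-client
```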
