
Commit 68deb11

Merge commit: 2 parents 3ee3b2b + 56dae30

File tree: 2 files changed (+4, −4 lines)


docs/mllib-feature-extraction.md (1 addition, 1 deletion)

@@ -68,7 +68,7 @@ val sc: SparkContext = ...
 val documents: RDD[Seq[String]] = sc.textFile("...").map(_.split(" ").toSeq)
 
 val hashingTF = new HashingTF()
-val tf: RDD[Vector] = hasingTF.transform(documents)
+val tf: RDD[Vector] = hashingTF.transform(documents)
 {% endhighlight %}
 
 While applying `HashingTF` only needs a single pass to the data, applying `IDF` needs two passes:
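The one-pass vs. two-pass distinction in that sentence can be illustrated with a plain-Python sketch (this is not Spark's MLlib API; the function names and the use of Python's built-in `hash` are stand-ins for illustration): each document's hashed TF vector depends only on that document, but IDF needs document frequencies from a full first pass over the corpus before a second pass can rescale the vectors.

```python
from collections import Counter
import math

def hashing_tf(doc, num_features=1000):
    # One pass: a document's hashed term-frequency vector depends only on itself.
    vec = [0.0] * num_features
    for term in doc:
        vec[hash(term) % num_features] += 1.0
    return vec

def tf_idf(docs, num_features=1000):
    # Pass 1: per-document TF vectors plus corpus-wide document frequencies.
    tfs = [hashing_tf(doc, num_features) for doc in docs]
    df = Counter()
    for tf in tfs:
        for i, v in enumerate(tf):
            if v > 0:
                df[i] += 1
    # Pass 2: rescale by the smoothed idf(t) = log((n + 1) / (df(t) + 1)),
    # which is only known once all documents have been seen.
    n = len(docs)
    return [[v * math.log((n + 1) / (df[i] + 1)) for i, v in enumerate(tf)]
            for tf in tfs]
```

One consequence of the smoothed formula: a term occurring in every document gets IDF log((n + 1)/(n + 1)) = 0 and so contributes nothing to any vector.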

docs/sql-programming-guide.md (3 additions, 3 deletions)

@@ -605,7 +605,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a SQLContext:
 
 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
 
 {% highlight scala %}
 // sc is an existing SparkContext.
@@ -643,7 +643,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a JavaSQLContext:
 
 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
 
 {% highlight java %}
 // sc is an existing JavaSparkContext.
@@ -681,7 +681,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a SQLContext:
 
 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
 
 {% highlight python %}
 # sc is an existing SparkContext.
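The split described in those bullets — `jsonFile` for newline-delimited JSON on disk, `jsonRDD` for JSON strings already held in an RDD — comes down to where the strings originate; both paths parse one JSON object per string and infer a schema from the results. A plain-Python sketch of that shared core (the helper names here are hypothetical, not Spark's API, and the schema inference is a toy version of what Spark SQL does):

```python
import json

def parse_json_records(lines):
    # Each element is one string containing a single JSON object: the shape
    # jsonFile expects per file line and jsonRDD expects per RDD element.
    return [json.loads(line) for line in lines]

def infer_schema(records):
    # Toy schema inference: the union of all keys seen across records,
    # each mapped to the Python type name of its first observed value.
    schema = {}
    for record in records:
        for key, value in record.items():
            schema.setdefault(key, type(value).__name__)
    return schema

lines = ['{"name": "Alice", "age": 30}', '{"name": "Bob", "city": "NYC"}']
records = parse_json_records(lines)
print(infer_schema(records))  # {'name': 'str', 'age': 'int', 'city': 'str'}
```

Note that the inferred schema is the union of fields: records missing a key (like `city` for Alice) still conform, which mirrors how Spark SQL merges heterogeneous JSON records into one schema.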
