2 files changed, +4 −4 lines changed

@@ -68,7 +68,7 @@ val sc: SparkContext = ...
 val documents: RDD[Seq[String]] = sc.textFile("...").map(_.split(" ").toSeq)

 val hashingTF = new HashingTF()
-val tf: RDD[Vector] = hasingTF.transform(documents)
+val tf: RDD[Vector] = hashingTF.transform(documents)
 {% endhighlight %}

 While applying `HashingTF` only needs a single pass to the data, applying `IDF` needs two passes:
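The line fixed in this hunk calls `HashingTF.transform`, which maps each document's terms into a fixed-size term-frequency vector via the hashing trick. As a rough illustration of that idea only — a plain-Python sketch, not the Spark API; the function name `hashing_tf` and the vector size are my own, and `hash()` stands in for Spark's deterministic hash function:

```python
# Sketch of the hashing trick behind HashingTF (plain Python, not Spark).
# Each term is hashed to an index in a fixed-size vector; the value at that
# index counts how many of the document's terms map there.
def hashing_tf(terms, num_features=1 << 20):
    vec = {}
    for term in terms:
        idx = hash(term) % num_features  # stand-in for Spark's deterministic hash
        vec[idx] = vec.get(idx, 0) + 1
    return vec

tf = hashing_tf("a b a c".split())
```

As the surrounding text notes, this needs only a single pass over the data; `IDF` needs two (one to count document frequencies, one to rescale).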
@@ -605,7 +605,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a SQLContext:

 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.

 {% highlight scala %}
 // sc is an existing SparkContext.
@@ -643,7 +643,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a JavaSQLContext:

 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.

 {% highlight java %}
 // sc is an existing JavaSparkContext.
@@ -681,7 +681,7 @@ Spark SQL can automatically infer the schema of a JSON dataset and load it as a
 This conversion can be done using one of two methods in a SQLContext:

 * `jsonFile` - loads data from a directory of JSON files where each line of the files is a JSON object.
-* `jsonRdd` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.
+* `jsonRDD` - loads data from an existing RDD where each element of the RDD is a string containing a JSON object.

 {% highlight python %}
 # sc is an existing SparkContext.