This repository was archived by the owner on May 9, 2024. It is now read-only.

Commit 32fa611

daisukebe authored and srowen committed
[SPARK-7704] Updating Programming Guides per SPARK-4397
The change per SPARK-4397 makes implicit objects in SparkContext be found by the compiler automatically, so we don't need to import the o.a.s.SparkContext._ explicitly any more and can remove some statements around the "implicit conversions" from the latest Programming Guides (1.3.0 and higher).

Author: Dice <[email protected]>

Closes apache#6234 from daisukebe/patch-1 and squashes the following commits:

b77ecd9 [Dice] fix a typo
45dfcd3 [Dice] rewording per Sean's advice
a094bcf [Dice] Adding a note for users on any previous releases
a29be5f [Dice] Updating Programming Guides per SPARK-4397
1 parent 6845cb2 commit 32fa611

File tree

1 file changed (+5, -6 lines)

docs/programming-guide.md

Lines changed: 5 additions & 6 deletions
@@ -41,14 +41,15 @@ In addition, if you wish to access an HDFS cluster, you need to add a dependency
     artifactId = hadoop-client
     version = <your-hdfs-version>

-Finally, you need to import some Spark classes and implicit conversions into your program. Add the following lines:
+Finally, you need to import some Spark classes into your program. Add the following lines:

 {% highlight scala %}
 import org.apache.spark.SparkContext
-import org.apache.spark.SparkContext._
 import org.apache.spark.SparkConf
 {% endhighlight %}

+(Before Spark 1.3.0, you need to explicitly `import org.apache.spark.SparkContext._` to enable essential implicit conversions.)
+
 </div>

 <div data-lang="java" markdown="1">
<div data-lang="java" markdown="1">
@@ -821,11 +822,9 @@ by a key.

 In Scala, these operations are automatically available on RDDs containing
 [Tuple2](http://www.scala-lang.org/api/{{site.SCALA_VERSION}}/index.html#scala.Tuple2) objects
-(the built-in tuples in the language, created by simply writing `(a, b)`), as long as you
-import `org.apache.spark.SparkContext._` in your program to enable Spark's implicit
-conversions. The key-value pair operations are available in the
+(the built-in tuples in the language, created by simply writing `(a, b)`). The key-value pair operations are available in the
 [PairRDDFunctions](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions) class,
-which automatically wraps around an RDD of tuples.
+which automatically wraps around an RDD of tuples.

 For example, the following code uses the `reduceByKey` operation on key-value pairs to count how
 many times each line of text occurs in a file:
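To illustrate what the updated guide text describes: on Spark 1.3.0 and later, pair-RDD operations like `reduceByKey` resolve without the extra import, because the implicit conversion to `PairRDDFunctions` lives in the `RDD` companion object where the compiler finds it automatically. A minimal sketch, assuming Spark 1.3.0+ on the classpath and a hypothetical local file `input.txt`:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

object LineCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("LineCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // No `import org.apache.spark.SparkContext._` needed here: since 1.3.0
    // the implicit conversion to PairRDDFunctions is found automatically.
    val lines = sc.textFile("input.txt")
    val counts = lines.map(line => (line, 1)).reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}
{% endhighlight %}

On releases before 1.3.0, the same code needs `import org.apache.spark.SparkContext._` for `reduceByKey` to be available on the tuple RDD.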
