Commit ed58742

mengxr authored and pwendell committed
SPARK-1117: update accumulator docs
The current doc hints that Spark doesn't support accumulators of type `Long`, which is wrong.

JIRA: https://spark-project.atlassian.net/browse/SPARK-1117

Author: Xiangrui Meng <[email protected]>

Closes apache#631 from mengxr/acc and squashes the following commits:

45ecd25 [Xiangrui Meng] update accumulator docs

(cherry picked from commit aaec7d4)
Signed-off-by: Patrick Wendell <[email protected]>
1 parent 84131fe commit ed58742

2 files changed, +3 -3 lines changed


core/src/main/scala/org/apache/spark/Accumulators.scala

Lines changed: 2 additions & 2 deletions

@@ -188,8 +188,8 @@ class GrowableAccumulableParam[R <% Growable[T] with TraversableOnce[T] with Ser
  * A simpler value of [[Accumulable]] where the result type being accumulated is the same
  * as the types of elements being merged, i.e. variables that are only "added" to through an
  * associative operation and can therefore be efficiently supported in parallel. They can be used
- * to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of type
- * `Int` and `Double`, and programmers can add support for new types.
+ * to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric
+ * value types, and programmers can add support for new types.
  *
  * An accumulator is created from an initial value `v` by calling [[SparkContext#accumulator]].
  * Tasks running on the cluster can then add to it using the [[Accumulable#+=]] operator.
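
Not part of the commit, but a minimal sketch of the behavior the corrected doc describes, using the accumulator API of this era. The object name, app name, and local master below are illustrative assumptions; the point of SPARK-1117 is that numeric value types beyond `Int` and `Double`, such as `Long`, already work out of the box.

import org.apache.spark.SparkContext

object AccumulatorSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "accumulator-sketch")

    // Long is covered by an implicit AccumulatorParam, so no extra code is needed.
    val totalLength = sc.accumulator(0L)

    sc.parallelize(Seq("a", "bb", "ccc")).foreach { s =>
      totalLength += s.length // tasks can only add; they cannot read the value
    }

    println(totalLength.value) // only the driver reads the value: prints 6
    sc.stop()
  }
}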

docs/scala-programming-guide.md

Lines changed: 1 addition & 1 deletion

@@ -344,7 +344,7 @@ After the broadcast variable is created, it should be used instead of the value

 ## Accumulators

-Accumulators are variables that are only "added" to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of type Int and Double, and programmers can add support for new types.
+Accumulators are variables that are only "added" to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric value types and standard mutable collections, and programmers can add support for new types.

 An accumulator is created from an initial value `v` by calling `SparkContext.accumulator(v)`. Tasks running on the cluster can then add to it using the `+=` operator. However, they cannot read its value. Only the driver program can read the accumulator's value, using its `value` method.
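
Also a sketch, not from the commit: the guide's "programmers can add support for new types" refers to implementing `AccumulatorParam` in this API generation. `MinMax` and `MinMaxParam` are hypothetical names; the merge is associative (and commutative), which is the property the doc text calls out.

import org.apache.spark.{AccumulatorParam, SparkContext}

// Hypothetical value type: track the smallest and largest Long seen so far.
case class MinMax(min: Long, max: Long)

object MinMaxParam extends AccumulatorParam[MinMax] {
  // Identity element for the merge; the shape of the initial value is ignored.
  def zero(initial: MinMax): MinMax = MinMax(Long.MaxValue, Long.MinValue)
  // Associative merge of two partial results.
  def addInPlace(a: MinMax, b: MinMax): MinMax =
    MinMax(math.min(a.min, b.min), math.max(a.max, b.max))
}

object MinMaxSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "minmax-sketch")
    // Pass the param explicitly since MinMaxParam is not declared implicit.
    val range = sc.accumulator(MinMax(Long.MaxValue, Long.MinValue))(MinMaxParam)
    sc.parallelize(Seq(3L, 9L, 1L)).foreach(x => range += MinMax(x, x))
    println(range.value) // MinMax(1,9)
    sc.stop()
  }
}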
