spark-user mailing list archives

From Amit Sela <amitsel...@gmail.com>
Subject Fault-tolerant Accumulators in DStream-only transformations
Date Tue, 29 Nov 2016 22:42:25 GMT
Hi all,

In order to recover Accumulators (functionally) from a Driver failure, it
is recommended to use them within a foreachRDD/transform, obtaining them
from the RDD's context through a Singleton that wraps the Accumulator, as
shown in the examples:
https://github.com/apache/spark/blob/branch-1.6/examples/src/main/scala/org/apache/spark/examples/streaming/RecoverableNetworkWordCount.scala#L123
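
For reference, that pattern looks roughly like this (a minimal sketch
against the Spark 1.6 Accumulator API; the object name, accumulator name,
and the empty-record check are just placeholders, not from the example):

import org.apache.spark.{Accumulator, SparkContext}
import org.apache.spark.streaming.dstream.DStream

// Lazily-created singleton, so the accumulator is re-registered with the
// new SparkContext after the driver is restarted from a checkpoint.
object DroppedRecordsCounter {
  @volatile private var instance: Accumulator[Long] = null

  def getInstance(sc: SparkContext): Accumulator[Long] = {
    if (instance == null) {
      synchronized {
        if (instance == null) {
          instance = sc.accumulator(0L, "DroppedRecordsCounter")
        }
      }
    }
    instance
  }
}

// Inside foreachRDD the accumulator is fetched from the RDD's own
// SparkContext, so it works both on first start and after recovery.
def countDropped(stream: DStream[String]): Unit = {
  stream.foreachRDD { rdd =>
    val counter = DroppedRecordsCounter.getInstance(rdd.sparkContext)
    rdd.foreach { record =>
      if (record.isEmpty) counter += 1L
    }
  }
}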

I was wondering if there's a similar technique for DStream-only
transformations, such as *updateStateByKey/mapWithState*?

Thanks,
Amit
