spark-user mailing list archives

From Serega Sheypak <>
Subject Spark 2.x duplicates output when task fails at "repartition" stage. Checkpointing is enabled before repartition.
Date Tue, 05 Feb 2019 18:15:20 GMT
Hi, I have a Spark job that produces duplicates in its output when one or more
tasks from the repartition stage fail.
Here is the simplified code.


val inputRDDs: List[RDD[String]] = List.empty // an RDD per input dir

val updatedRDDs = inputRDDs.map { inputRDD =>
  // some stuff happens here
}

val unionOfUpdatedRDDs = sparkContext.union(updatedRDDs)

unionOfUpdatedRDDs.checkpoint() // it didn't help

unionOfUpdatedRDDs
  .repartition(42) // task failed here,
  .saveAsNewAPIHadoopFile("/path") // task failed here too.

// what really causes duplicates in output?
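One commonly cited mechanism for this class of problem (see SPARK-23207) is that
round-robin `repartition` assigns records to partitions based on the order in
which they arrive, and that order is not guaranteed to be stable when a failed
task is recomputed. The plain-Scala sketch below (no Spark required; the object
and helper names are hypothetical, for illustration only) simulates how a retried
attempt seeing its input in a different order can duplicate some records and
drop others:

```scala
// Hypothetical simulation: round-robin assignment sends record i to
// partition i % numPartitions, so partition contents depend entirely
// on input order.
object RepartitionRetrySim {
  def roundRobin(records: Seq[String], numPartitions: Int): Map[Int, Seq[String]] =
    records.zipWithIndex
      .groupBy { case (_, i) => i % numPartitions }
      .map { case (p, rs) => p -> rs.map(_._1) }

  def main(args: Array[String]): Unit = {
    val original = Seq("a", "b", "c", "d")
    val firstRun = roundRobin(original, 2)      // p0 = (a, c), p1 = (b, d)

    // A retried task recomputes its input, but upstream ordering is not
    // guaranteed to be deterministic, so the retry may see a different order:
    val retried = roundRobin(Seq("b", "a", "d", "c"), 2) // p0 = (b, d), p1 = (a, c)

    // If partition 0's output from the first attempt was already committed
    // and partition 1 is rewritten from the retried attempt, the job emits:
    val written = firstRun(0) ++ retried(1)
    println(written) // a and c appear twice; b and d are lost
  }
}
```

This is only a sketch of the suspected failure mode, not a diagnosis of the job
above; checkpointing does not help here because the nondeterminism is in the
ordering of the recomputed partition, not in the lineage itself.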
