spark-user mailing list archives

From 王 宇 <>
Subject is there a way to specify interval between task retry attempts ?
Date Mon, 30 Oct 2017 01:25:49 GMT
Sorry for interrupting; I have a quick question about the retry mechanism for failed tasks.
I would like to know whether there is a way to specify the interval between task retry attempts.
I have set spark.task.maxFailures to a relatively large number, but because the network is unstable
and failed tasks are always retried very quickly (at the millisecond level, as I observed), my Spark
Streaming job, which receives docs from Kafka, applies some transformations, and finally writes the
updated docs into an Elasticsearch cluster, still fails quite frequently once the maximum retry
count is exhausted.
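As far as I know, Spark does not expose a setting for the delay between task retries, so a common workaround is to absorb transient failures inside the task itself: wrap the flaky call (here, the Elasticsearch write) in an application-level retry loop with exponential backoff, so that only persistent failures count against spark.task.maxFailures. A minimal sketch in Python (the helper name and parameters are my own, not a Spark API):

```python
import random
import time


def retry_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Call fn(); on exception, sleep with exponential backoff plus a
    little jitter, then retry. Re-raises after max_attempts failures."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # 0.5s, 1s, 2s, ... capped at max_delay, with up to 10% jitter
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

Inside a streaming job this would typically wrap the per-partition bulk write (e.g. in foreachPartition), so brief network blips are retried with a real pause instead of immediately failing the task.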
