spark-user mailing list archives

From Tao Li <>
Subject Can we allow an executor to exit when its tasks fail too many times?
Date Mon, 06 Jul 2015 04:25:20 GMT
I have a long-lived Spark application running on YARN.

On some nodes, the shuffle map task tries to write to the shuffle path, but
the root path /search/hadoop10/yarn_local/usercache/spark/ was deleted, so
the task fails. As a result, every shuffle map task scheduled on such a node
fails, because the root path no longer exists.

I want to know whether we can set a maximum task-failure count per executor.
If the number of failed tasks on an executor exceeds that threshold, could
the driver take the executor offline and request a new one?
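For context, a minimal sketch of the failure-related settings that Spark does expose. Neither is the per-executor task-failure threshold asked about here: `spark.task.maxFailures` counts attempts per task across the whole application, and `spark.yarn.max.executor.failures` counts executor losses before the YARN application fails. The values and `your-app.jar` below are purely illustrative:

```shell
# spark.task.maxFailures: number of attempts per task before the job aborts
# (application-level, not per-executor).
# spark.yarn.max.executor.failures: executor failures tolerated before the
# YARN application itself is failed.
spark-submit \
  --master yarn \
  --conf spark.task.maxFailures=8 \
  --conf spark.yarn.max.executor.failures=16 \
  your-app.jar
```

These only control when the job or application gives up; they do not blacklist a bad node or recycle a single failing executor, which is the behavior being requested.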

shuffle path :
