spark-user mailing list archives

From "Chawla,Sumit " <sumitkcha...@gmail.com>
Subject What is correct behavior for spark.task.maxFailures?
Date Fri, 21 Apr 2017 20:32:26 GMT
I am seeing a strange issue. A misbehaving slave caused my entire job to
fail.  I have set spark.task.maxFailures to 8 for the job, but it seems
that all retries of a failed task are scheduled on the same slave.  My
expectation was that a failed task would be retried on a different slave,
since the chance of all 8 retries landing on the same slave should be
very low.
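For reference, here is a minimal sketch of my configuration. The
spark.blacklist.* settings are an assumption on my part: they are marked
experimental (Spark 2.1+) and, as I understand it, are what actually limit
how many attempts of the same task may run on one node, since
spark.task.maxFailures alone only caps total attempts before the job aborts.

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("maxFailures-example")
      // Abort the job after 8 failures of any single task.
      .set("spark.task.maxFailures", "8")
      // Assumption: experimental blacklisting, available since Spark 2.1.
      .set("spark.blacklist.enabled", "true")
      // Assumption: after 2 failed attempts on one node, retry elsewhere.
      .set("spark.blacklist.task.maxTaskAttemptsPerNode", "2")

    val sc = new SparkContext(conf)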


Regards
Sumit Chawla
