spark-dev mailing list archives

From: Grega Kešpret <gr...@celtra.com>
Subject: Re: spark.task.maxFailures
Date: Mon, 16 Dec 2013 22:12:39 GMT
Any news regarding this setting? Is this expected behaviour? Is there some
other way I can have Spark fail-fast?

Thanks!

On Mon, Dec 9, 2013 at 4:35 PM, Grega Kešpret <grega@celtra.com> wrote:

> Hi!
>
> I tried this (by setting spark.task.maxFailures to 1) and it still does
> not fail fast. I started a job and, after some time, killed all the JVMs
> running on one of the two workers. I was expecting the Spark job to fail;
> however, it resubmitted the tasks to the worker that was still alive and
> the job succeeded.
>
> Grega
>
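For reference, a minimal sketch of the setup described in the quoted message,
assuming the system-property configuration style used by Spark 0.8.x. The
property name and value come from this thread; the master URL, application
name, and the job itself are illustrative only:

    import org.apache.spark.SparkContext

    object MaxFailuresSketch {
      def main(args: Array[String]) {
        // Must be set before the SparkContext is created so the scheduler sees it.
        // A value of 1 allows a single attempt per task, i.e. no retries,
        // which is the fail-fast behaviour being tested in this thread.
        System.setProperty("spark.task.maxFailures", "1")

        // Hypothetical master URL and application name, for illustration only.
        val sc = new SparkContext("spark://master:7077", "fail-fast-test")

        // A trivial job: the expectation is that if any task fails once, the
        // whole job aborts instead of being rescheduled on a surviving worker.
        val sum = sc.parallelize(1 to 1000, 8).map(_ * 2).reduce(_ + _)
        println("sum = " + sum)

        sc.stop()
      }
    }

Whether tasks lost when a worker's JVMs are killed count against this limit
is exactly the open question raised in this thread.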
