spark-user mailing list archives

From Timothy Chen <tnac...@gmail.com>
Subject Re: Retry option and range resource configuration for Spark job on Mesos
Date Fri, 06 Jul 2018 23:00:21 GMT
Hi Tien,

There is no retry at the job level, since we expect the user to retry the
job, and, as you mention, we already tolerate task retries.
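
To make the distinction concrete, here is a rough sketch (mine, not
anything built into Spark) in Scala: task retries come for free via
spark.task.maxFailures, while job-level retries have to be a loop in your
own driver or submission code. The app name, helper, and job body below
are all hypothetical.

    import org.apache.spark.sql.SparkSession

    // Task-level retries are built in: each task is retried up to
    // spark.task.maxFailures times (default 4) before the job aborts.
    val spark = SparkSession.builder()
      .appName("retry-sketch") // hypothetical app name
      .config("spark.task.maxFailures", "8")
      .getOrCreate()

    // Hypothetical job-level retry loop; there is no Spark/Mesos setting
    // for this, so the caller re-runs the whole job body on failure.
    def runWithRetries(maxAttempts: Int)(job: => Unit): Unit = {
      var attempt = 0
      var succeeded = false
      while (!succeeded && attempt < maxAttempts) {
        attempt += 1
        try { job; succeeded = true }
        catch {
          case e: Exception if attempt < maxAttempts =>
            println(s"Attempt $attempt failed (${e.getMessage}); retrying")
        }
      }
    }

    runWithRetries(maxAttempts = 3) {
      spark.range(1000000).count() // stand-in for the real job
    }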

There is no request/limit-style resource configuration of the kind you
described in Spark on Mesos (yet).

So for 2) that’s not possible at the moment.
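
What you can express today is a fixed cap rather than a range. A hedged
sketch, assuming coarse-grained Mesos mode: spark.cores.max bounds the
total cores the job will acquire and spark.executor.memory fixes the
per-executor memory, but neither accepts a minimum.

    import org.apache.spark.sql.SparkSession

    // Fixed upper bound only: there is no way to express
    // "at least 20, at most 30" cores, nor a memory range.
    val sparkCapped = SparkSession.builder()
      .config("spark.cores.max", "30")        // hard cap on total cores
      .config("spark.executor.memory", "20g") // fixed per-executor memory
      .getOrCreate()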

Tim


On Fri, Jul 6, 2018 at 11:42 PM Tien Dat <tphan.dat@gmail.com> wrote:

> Dear all,
>
> We are running Spark with Mesos as the resource manager. We are interested
> in a few aspects, such as:
>
> 1, Is it possible to configure a specific job with a maximum number of
> retries?
> I mean retry at the job level here, NOT spark.task.maxFailures, which
> applies to the tasks within a job.
>
> 2, Is it possible to give a job a range of resources, such as: at least 20
> and at most 30 CPU cores, and at least 20 GB and at most 40 GB of memory?
>
> Thank you in advance.
>
> Best
> Tien Dat
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
