spot-dev mailing list archives

From Mark Grover <m...@apache.org>
Subject Re: Number of executors and dynamic allocation
Date Tue, 04 Apr 2017 14:20:24 GMT
I am sorry, I meant to say (I missed the "not" in my previous email):
> Nope. When *not* using dynamic allocation, it's a static number of executors
> throughout the job, no max or min. If you specify the max value config,
> it'd be either ignored or result in an error (likely just ignored -
> haven't tested).

As I understand it, min and max don't make sense in a non-dynamic-allocation
scenario; let me know if I am missing something, though.
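
For example, with static allocation the count is fixed up front (an untested
sketch, reusing the SPK_EXEC variable from your snippet):

  spark-submit \
    --conf spark.dynamicAllocation.enabled=false \
    --num-executors ${SPK_EXEC} \
    ...

Those ${SPK_EXEC} executors are held for the whole job, idle or not.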

Yeah, the external shuffle service is a prerequisite for dynamic allocation,
and yes, it needs to be enabled.
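
So, putting your suggestion together, something like this (again an untested
sketch; the min value of 0 is just for illustration):

  spark-submit \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=0 \
    --conf spark.dynamicAllocation.maxExecutors=${SPK_EXEC} \
    ...

Spark would then scale the executor count between 0 and ${SPK_EXEC} based on
the pending task backlog.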

On Mon, Apr 3, 2017 at 10:55 PM, Giacomo Bernardi <mino@minux.it> wrote:

> > Nope. When using dynamic allocation, it's a static number of executors
> > throughout the job, no max or min. If you specify the max value config,
> > it'd be either ignored or result in an error (likely just ignored -
> > haven't tested).
>
> No, you can still specify the min and max number of executors even if
> you enable dynamic allocation, by using:
> - spark.dynamicAllocation.maxExecutors (default: infinity)
> - spark.dynamicAllocation.minExecutors (default: 0)
>
> That's why I was suggesting simply changing:
>   --num-executors ${SPK_EXEC} \
> into:
>   --conf spark.dynamicAllocation.maxExecutors=${SPK_EXEC} \
>
> Also, do we need spark.shuffle.service.enabled=true, as the docs seem to
> require?
>
> Giacomo
>
