spark-user mailing list archives

From Andrew Or <and...@databricks.com>
Subject Re: Why standalone mode don't allow to set num-executor ?
Date Tue, 18 Aug 2015 20:25:36 GMT
Hi Canan,

This is mainly for legacy reasons. The default behavior in standalone
mode is that the application grabs all available resources in the cluster.
This effectively means one executor per worker, where each executor
grabs all the available cores and memory on that worker. In this model, it
doesn't really make sense to express the number of executors, because that's
equivalent to the number of workers.

In 1.4+, however, we do support multiple executors per worker, but that's
not the default so we decided not to add support for the --num-executors
setting to avoid potential confusion.
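To make the difference concrete, here is a minimal spark-submit sketch for standalone mode using the two flags mentioned in this thread. The master URL, jar name, and specific numbers are placeholders, not from the thread; the flags themselves (--total-executor-cores, --executor-cores, --executor-memory) are real spark-submit options.

```shell
# Sketch only: in standalone mode the executor count is implied, not set.
# With the app capped at --total-executor-cores and each executor capped
# at --executor-cores, the scheduler can launch up to
# total-executor-cores / executor-cores executors (here, up to 16 / 2 = 8).
# Master URL and jar path are placeholders.
spark-submit \
  --master spark://master-host:7077 \
  --total-executor-cores 16 \
  --executor-cores 2 \
  --executor-memory 4g \
  myapp.jar
```

In 1.4+, the multiple-executors-per-worker behavior described above falls out of this: if --executor-cores is smaller than the number of cores a worker offers, a single worker can host more than one executor.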

-Andrew


2015-08-18 2:35 GMT-07:00 canan chen <ccnfdu@gmail.com>:

> num-executors only works in YARN mode. In standalone mode, I have to set
> --total-executor-cores and --executor-cores instead. Isn't that
> unintuitive? Any reason for that?
>
