spark-user mailing list archives

From Deborah Siegel <deborah.sie...@gmail.com>
Subject Re: Number of cores per executor on Spark Standalone
Date Sun, 01 Mar 2015 09:58:56 GMT
Hi,

Someone else will have a better answer. I think that in standalone mode,
executors will grab whatever cores they can, based on either configuration
on the worker or application-specific configuration. Could be wrong, but
I believe Mesos is similar, and that YARN is alone in the ability to
specify an exact number of cores per executor (the --executor-cores flag).

For standalone mode, configuration on each worker can limit the number of
cores that worker offers, and each application can limit the total number
of cores it will grab across the entire cluster.

1) Environment variable on each worker: SPARK_WORKER_CORES (set in
conf/spark-env.sh), or --cores when you start the worker manually. This
affects how many cores the worker offers to all applications.
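
A sketch of both options (the master URL here is a placeholder for your
own, and paths assume a standard Spark layout):

```shell
# Option A: cap this worker at 4 cores in conf/spark-env.sh
# (takes effect the next time the worker process starts)
export SPARK_WORKER_CORES=4

# Option B: pass --cores when launching a worker by hand
# (spark://master:7077 is a placeholder master URL)
./bin/spark-class org.apache.spark.deploy.worker.Worker \
  --cores 4 spark://master:7077
```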
2) Property on the master: spark.deploy.defaultCores, which limits the
number of cores any single application can grab in the case that the
application has not set spark.cores.max (or --total-executor-cores as a
flag to spark-submit). If the application has not set spark.cores.max, and
the master does not have spark.deploy.defaultCores set, the application can
grab all available cores in the cluster. Could be an issue for a shared
cluster.
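
As a sketch, an application could cap itself either way (the master URL,
class, and jar names are placeholders):

```shell
# In conf/spark-defaults.conf, cap the app at 8 cores cluster-wide:
#   spark.cores.max  8
#
# Or pass the equivalent flag to spark-submit:
./bin/spark-submit \
  --master spark://master:7077 \
  --total-executor-cores 8 \
  --class com.example.MyApp myapp.jar

# On a shared cluster, the master can also set spark.deploy.defaultCores
# as a fallback cap for apps that set neither of the above.
```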

Sincerely,
Deb


On Fri, Feb 27, 2015 at 11:13 PM, bit1129@163.com <bit1129@163.com> wrote:

> Hi,
>
> I know that spark on yarn has a configuration parameter (executor-cores
> NUM) to specify the number of cores per executor.
> How about spark standalone? I can specify the total cores, but how could I
> know how many cores each executor will take (presume one node, one
> executor)?
>
>
> ------------------------------
> bit1129@163.com
>
