spark-user mailing list archives

From nsalian <>
Subject Re: a question about --executor-cores
Date Tue, 03 May 2016 19:17:17 GMT

Thank you for posting the question.
To begin I do have a few questions.
1) What is the size of the YARN installation? How many NodeManagers?

2) Notes to remember:
Container Virtual CPU Cores
>> The number of virtual CPU cores that can be allocated for containers.

Container Virtual CPU Cores Maximum
>> The largest number of virtual CPU cores that can be requested for a
>> container.

For executor-cores:
Every Spark executor in an application has the same fixed number of cores
and same fixed heap size. The number of cores can be specified with the
--executor-cores flag when invoking spark-submit, spark-shell, and pyspark
from the command line, or by setting the spark.executor.cores property in
the spark-defaults.conf file or on a SparkConf object. 

Similarly, the heap size can be controlled with the --executor-memory flag
or the spark.executor.memory property. The cores property controls the
number of concurrent tasks an executor can run. --executor-cores 5 means
that each executor can run a maximum of five tasks at the same time. The
memory property impacts the amount of data Spark can cache, as well as the
maximum sizes of the shuffle data structures used for grouping,
aggregations, and joins.
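As a sketch of those three configuration paths (the app name, file names, and the 5-core/19g values here are just illustrative, matching the example later in this thread):

```python
# Hypothetical sketch: the same executor sizing expressed three ways,
# using Spark's standard spark.executor.* property names.

executor_cores = 5       # max concurrent tasks per executor
executor_memory = "19g"  # executor heap size

# 1) Flags on the spark-submit / spark-shell / pyspark command line:
cmd = (
    f"spark-submit --executor-cores {executor_cores} "
    f"--executor-memory {executor_memory} app.py"
)

# 2) Equivalent lines for spark-defaults.conf (one "key value" pair per line):
defaults = [
    f"spark.executor.cores {executor_cores}",
    f"spark.executor.memory {executor_memory}",
]

# 3) Equivalent settings for a SparkConf object (shown as a plain dict here
#    to keep the sketch self-contained; in a real pyspark job you would call
#    SparkConf().set(key, value) for each pair):
conf_settings = {
    "spark.executor.cores": str(executor_cores),
    "spark.executor.memory": executor_memory,
}
```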

Imagine a cluster with six nodes running NodeManagers, each equipped with 16
cores and 64GB of memory. The NodeManager capacities,
yarn.nodemanager.resource.memory-mb and
yarn.nodemanager.resource.cpu-vcores, should probably be set to 63 * 1024 =
64512 (megabytes) and 15 respectively. We avoid allocating 100% of the
resources to YARN containers because the node needs some resources to run
the OS and Hadoop daemons. In this case, we leave a gigabyte and a core for
these system processes. Cloudera Manager helps by accounting for these and
configuring these YARN properties automatically.
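The per-node capacity arithmetic above works out as follows (same example numbers: 16 cores and 64GB per NodeManager, with one core and one gigabyte reserved for the OS and Hadoop daemons):

```python
# Per-node YARN capacity for the example cluster.
node_cores = 16
node_memory_gb = 64

# Reserved for the OS and Hadoop daemons (not given to YARN containers).
reserved_cores = 1
reserved_memory_gb = 1

# yarn.nodemanager.resource.cpu-vcores
yarn_vcores = node_cores - reserved_cores

# yarn.nodemanager.resource.memory-mb (the property is in megabytes)
yarn_memory_mb = (node_memory_gb - reserved_memory_gb) * 1024

print(yarn_vcores, yarn_memory_mb)  # 15 64512
```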

The likely first impulse would be to use --num-executors 6 --executor-cores
15 --executor-memory 63G. However, this is the wrong approach because:

- 63GB plus the executor memory overhead won't fit within the 63GB capacity
of the NodeManagers.
- The application master will take up a core on one of the nodes, meaning
there won't be room for a 15-core executor on that node.
- 15 cores per executor can lead to bad HDFS I/O throughput.
A better option would be to use --num-executors 17 --executor-cores 5
--executor-memory 19G. Why?

- This config results in three executors on all nodes except the one running
the AM, which will have two.
- --executor-memory was derived as 63GB / 3 executors per node = 21GB of
container budget per executor. Subtracting the roughly 7% off-heap memory
overhead, 21 * 0.07 = 1.47, leaves 21 - 1.47 ~ 19GB for the heap.
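The whole derivation, worked end to end (assuming the roughly 7% default off-heap memory overhead fraction):

```python
# Deriving --num-executors 17 --executor-cores 5 --executor-memory 19G
# from the 6-node, 15-vcore, 63GB-per-node cluster in this example.
nodes = 6
vcores_per_node = 15
memory_per_node_gb = 63

executor_cores = 5
executors_per_node = vcores_per_node // executor_cores   # 3 executors fit per node

# One executor slot is given up so the application master has room.
num_executors = nodes * executors_per_node - 1           # 17

# Split each node's memory across its executors, then carve out the
# off-heap overhead (~7%, assumed default) to get the heap size.
memory_per_executor = memory_per_node_gb / executors_per_node  # 21.0 GB
overhead_gb = memory_per_executor * 0.07                       # ~1.47 GB
executor_memory_gb = int(memory_per_executor - overhead_gb)    # 19 -> 19G
```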

This is covered here:

Neelesh S. Salian
