spark-user mailing list archives

From Shekhar Bansal <>
Subject Re: --executor-cores cannot change vcores in yarn?
Date Tue, 04 Nov 2014 06:58:12 GMT
If you are using the capacity scheduler in YARN: by default the YARN capacity
scheduler uses DefaultResourceCalculator, which considers only memory when
allocating containers.
You can use DominantResourceCalculator instead; it considers both memory and CPU.
Set it in capacity-scheduler.xml.
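For reference, a sketch of the change (this is the standard Hadoop property name; verify it against the docs for your Hadoop version):

{code}
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
{code}

The ResourceManager needs to be restarted (or the scheduler queues refreshed) for the change to take effect.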

On 04/11/14 3:03 am, "Gen" <> wrote:

>Well, I didn't find the original documentation, but from what I have read,
>vcores are not physical CPU cores but "virtual" cores.
>I also used the top command to monitor CPU utilization while the Spark job ran:
>Spark can use all CPUs even if I leave --executor-cores at its default (1).
>Hope that helps.
>Gen wrote
>> Hi,
>> Maybe it is a stupid question, but I am running spark on yarn. I request
>> the resources by the following command:
>> {code}
>> ./spark-submit --master yarn-client --num-executors #number of worker
>> --executor-cores #number of cores. ...
>> {code}
>> However, after launching the job, I use
>> yarn node -status ID
>> to monitor the state of the cluster. It shows that the number of vcores
>> used by each container is always 1, no matter what number I pass via
>> --executor-cores.
>> Any ideas how to solve this problem? Thanks a lot in advance for your
>> help.
>> Cheers
>> Gen

To unsubscribe, e-mail:
For additional commands, e-mail:
