spark-user mailing list archives

From Gen <>
Subject Re: --executor-cores cannot change vcores in yarn?
Date Mon, 03 Nov 2014 21:34:53 GMT

Well, I couldn't find the original documentation, but according to
<> ,
vcores do not correspond to physical CPU cores; they are "virtual" cores.
I also used the top command to monitor CPU utilization during the Spark task:
Spark can use all the CPUs even when I leave --executor-cores at its default (1).
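
For what it's worth: if the cluster's CapacityScheduler is left on its
default DefaultResourceCalculator, YARN allocates containers by memory
only, so the report shows 1 vcore per container no matter what you pass
to --executor-cores, even though the executor really uses more cores. A
minimal way to check, assuming the usual config path /etc/hadoop/conf
(which may differ on your distribution):
{code}
# Show the resource calculator configured for the CapacityScheduler
# (the config path is an assumption; adjust for your Hadoop layout).
grep -B1 -A2 resource-calculator /etc/hadoop/conf/capacity-scheduler.xml

# Setting yarn.scheduler.capacity.resource-calculator to
#   org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
# makes YARN account for vcores as well as memory.
{code}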

Hope this helps.

Gen wrote
> Hi,
> Maybe it is a stupid question, but I am running Spark on YARN. I request
> resources with the following command:
> {code}
> ./spark-submit --master yarn-client --num-executors <number of executors> \
>   --executor-cores <number of cores per executor> ...
> {code}
> However, after launching the task, I use
> yarn node -status <node ID>
> to monitor the state of the cluster. It shows that the number of vcores
> used by each container is always 1, no matter what value I pass via
> --executor-cores.
> Any idea how to solve this problem? Thanks a lot in advance for your
> help.
> Cheers
> Gen
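
To make the quoted command concrete, here is a sketch with illustrative
values; the application class and jar names are made up:
{code}
# Ask YARN for 4 executors with 2 cores each (all values illustrative;
# com.example.MyApp and my-app.jar are hypothetical placeholders).
./spark-submit --master yarn-client \
  --num-executors 4 \
  --executor-cores 2 \
  --class com.example.MyApp \
  my-app.jar
{code}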
