spark-user mailing list archives

From Saisai Shao <sai.sai.s...@gmail.com>
Subject Re: Spark on yarn, only 1 or 2 vcores getting allocated to the containers getting created.
Date Wed, 03 Aug 2016 07:53:29 GMT
Use the dominant resource calculator instead of the default resource
calculator and you will get the vcore allocation you expect. By default,
YARN does not honor CPU cores as a resource, so you will always see 1
vcore per container no matter how many cores you set in Spark.
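A minimal sketch of the change, assuming the CapacityScheduler (the
default scheduler; the FairScheduler is configured differently): in
capacity-scheduler.xml on the ResourceManager, set the resource
calculator and restart the ResourceManager.

```xml
<!-- capacity-scheduler.xml: make the CapacityScheduler account for
     CPU (vcores) as well as memory when sizing containers.
     Assumes the CapacityScheduler is in use; a ResourceManager
     restart is needed for the change to take effect. -->
<property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```

With this in place, a container requested with --executor-cores 20
should show 20 vcores on the "Nodes of the cluster" page instead of 1.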

On Wed, Aug 3, 2016 at 12:11 PM, satyajit vegesna <
satyajit.apasprk@gmail.com> wrote:

> Hi All,
>
> I am trying to run a Spark job on YARN, specifying --executor-cores
> 20.
> But when I check the "Nodes of the cluster" page at
> http://hostname:8088/cluster/nodes, I see 4 containers getting
> created on each node in the cluster.
>
> However, only 1 vcore is assigned to each container, even though I
> specify --executor-cores 20 when submitting the job with spark-submit.
>
> yarn-site.xml
> <property>
>         <name>yarn.scheduler.maximum-allocation-mb</name>
>         <value>60000</value>
> </property>
> <property>
>         <name>yarn.scheduler.minimum-allocation-vcores</name>
>         <value>1</value>
> </property>
> <property>
>         <name>yarn.scheduler.maximum-allocation-vcores</name>
>         <value>40</value>
> </property>
> <property>
>         <name>yarn.nodemanager.resource.memory-mb</name>
>         <value>70000</value>
> </property>
> <property>
>         <name>yarn.nodemanager.resource.cpu-vcores</name>
>         <value>20</value>
> </property>
>
>
> Has anyone faced the same issue?
>
> Regards,
> Satyajit.
>
