spark-user mailing list archives

From Xiaoye Sun <>
Subject Multiple vcores per container when running Spark applications in Yarn cluster mode
Date Fri, 08 Sep 2017 22:54:39 GMT

I am using Spark 1.6.1 and Yarn 2.7.4.
I want to submit a Spark application to a Yarn cluster. However, I found
that the number of vcores assigned to a container/executor is always 1,
even when I set spark.executor.cores=2. I also found that the number of
tasks an executor runs concurrently is 2. So it seems that Spark knows
an executor/container has two CPU cores, but the request is not correctly
sent to Yarn's resource scheduler. I am using
the org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
on Yarn.
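For reference, a minimal sketch of a submission that requests two cores per executor in yarn-cluster mode looks like this (the class name, jar, and memory settings below are placeholders, not taken from my actual job):

```shell
# Hypothetical spark-submit invocation illustrating the setup above.
# --conf spark.executor.cores is equivalent to the --executor-cores flag.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=4g \
  --class com.example.MyApp \
  my-app.jar
```

Even with spark.executor.cores=2 set this way, the Yarn UI still reports 1 vcore per container.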

I am wondering whether it is possible to assign multiple vcores to a
container when a Spark job is submitted to a Yarn cluster in yarn-cluster mode.

