spark-user mailing list archives

From yncxcw <>
Subject Re: Spark EMR executor-core vs Vcores
Date Mon, 26 Feb 2018 23:05:00 GMT
hi, all

I also noticed this problem. The reason is that YARN accounts each executor
container as using only 1 vcore, no matter how many cores you configured,
because YARN by default uses memory as the only metric for resource
allocation. This means YARN will pack as many executors onto each node as
will fit in the node's free memory, ignoring CPU entirely.
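To make the difference concrete, here is a small sketch (with assumed node and executor sizes, not taken from the thread) of how many executors fit on one node under memory-only packing versus dominant-resource packing:

```python
# Sketch (assumed numbers): how many executor containers YARN packs onto
# one node under the default memory-only calculator vs the
# DominantResourceCalculator.
def executors_per_node(node_mem_gb, node_vcores,
                       executor_mem_gb, executor_vcores,
                       dominant=False):
    """Return how many executor containers fit on one node."""
    by_memory = node_mem_gb // executor_mem_gb
    if not dominant:
        # DefaultResourceCalculator: vcores are ignored entirely.
        return by_memory
    # DominantResourceCalculator: both dimensions constrain packing.
    by_vcores = node_vcores // executor_vcores
    return min(by_memory, by_vcores)

# A 64 GB / 16-vcore node with 4 GB, 4-vcore executors:
print(executors_per_node(64, 16, 4, 4, dominant=False))  # 16 executors
print(executors_per_node(64, 16, 4, 4, dominant=True))   # 4 executors
```

With memory-only accounting the node would be oversubscribed to 64 vcores (16 executors x 4 cores) against only 16 physical vcores.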

If you want vcores to be accounted for in resource allocation, you can
configure the resource calculator as DominantResourceCalculator:

Property	Description
yarn.scheduler.capacity.resource-calculator	The ResourceCalculator
implementation to be used to compare Resources in the scheduler. The
default, org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator,
only uses Memory, while DominantResourceCalculator uses the dominant
resource to compare multi-dimensional resources such as Memory, CPU, etc.
A Java ResourceCalculator class name is expected.
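As a sketch, the switch can be made in capacity-scheduler.xml (property name as in the table above; this assumes your cluster uses the CapacityScheduler):

```xml
<!-- capacity-scheduler.xml: make the CapacityScheduler account for CPU
     as well as memory when comparing and packing containers. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```

The ResourceManager needs to be restarted (or the scheduler queues refreshed) for the change to take effect.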

Please also refer to this article:

Wei Chen
