Got the below exception in the logs:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=5, maxVirtualCores=4
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateResourceRequests(
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(

My understanding was that --executor-cores (5 here) is the maximum number of concurrent tasks per executor, and --num-executors (10 here) is the number of executors/containers the ApplicationMaster/Spark driver requests from the YARN RM.
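As a quick sketch of how those two flags multiply out (values taken from this thread; this is just the arithmetic, not a claim about how YARN grants the containers):

```shell
# Flags from the thread: 10 executors, 5 cores (task slots) each.
NUM_EXECUTORS=10
EXECUTOR_CORES=5

# Maximum tasks the driver can run concurrently across the cluster,
# assuming YARN actually grants all requested containers.
TOTAL_SLOTS=$((NUM_EXECUTORS * EXECUTOR_CORES))
echo "Task slots cluster-wide: $TOTAL_SLOTS"
```

Each of those 50 slots corresponds to one task running at a time, so the per-executor value 5 is what gets validated against the YARN vcore cap.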

So these --executor-cores (5) are parallel task slots per executor, so why is that limit enforced by the YARN RM rather than by the executor JVM itself? The executor JVMs are started as containers by the respective NodeManagers; why can't those 10 JVMs run more task threads than the vcores configured on their node?

Is there a way to control maxVirtualCores for an application, and set it much larger than the number of physical cores, say 100 on an 8-core system? For a JVM process doing CPU + IO + network intensive work, 100 threads is not very many.
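For reference, the relevant knobs appear to be the RM-side per-container vcore cap (whose default of 4 matches the maxVirtualCores=4 in the exception) and the per-node vcore count each NM advertises. A hedged yarn-site.xml sketch, with property names from the Hadoop YARN configuration and purely illustrative values:

```xml
<!-- yarn-site.xml (illustrative values, not a recommendation) -->

<!-- RM side: maximum vcores a single container request may ask for. -->
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>100</value>
</property>

<!-- NM side: how many vcores this node advertises to the RM. vcores are
     a scheduling abstraction, so this can be set higher than the number
     of physical cores to overcommit CPU. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>100</value>
</property>
```

Note these are cluster-side settings, not per-application ones, and as far as I understand, with the CapacityScheduler's default resource calculator vcores are checked against the cap at request time but not otherwise enforced at runtime.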

On Tue, Jul 14, 2015 at 10:52 PM, Marcelo Vanzin <> wrote:

On Tue, Jul 14, 2015 at 9:57 AM, Shushant Arora <> wrote:
When I specify --executor-cores > 4, it fails to start the application.
When I give --executor-cores as 4, it works fine.

Do you have any NM that advertises more than 4 available cores?

Also, it's always worth checking whether there's anything interesting in the logs; see the "yarn logs" command, and also the RM/NM logs.
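For the log check mentioned above, a sketch of the CLI usage (the application ID below is a placeholder; substitute the real one from the RM UI or `yarn application -list`):

```shell
# Placeholder application ID
APP_ID="application_1436900000000_0001"

# Fetch the aggregated container logs for a finished/failed application.
yarn logs -applicationId "$APP_ID"
```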