I got the following exception in the logs while running a Spark job on YARN:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=5, maxVirtualCores=4
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:205)
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateResourceRequests(RMServerUtils.java:94)
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:487)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
-----------------------------------------------------------------------------------------------------------------------------------
My understanding was that --executor-cores (5 here) is the maximum number of tasks that can run concurrently inside one executor, and --num-executors (10 here) is the number of executors (containers) that the Application Master / Spark driver requests from the YARN ResourceManager.
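For reference, the job was submitted roughly like this (the class name, jar path, and memory setting below are placeholders, not the exact values I used):

    spark-submit \
        --master yarn \
        --deploy-mode cluster \
        --num-executors 10 \
        --executor-cores 5 \
        --executor-memory 4g \
        --class com.example.MyApp \
        myapp.jar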
So if --executor-cores (5) only controls how many tasks run in parallel inside each executor, why is it validated by YARN at all rather than left to the executor JVM? Each of these 10 executor JVMs is launched as a container by the NodeManager on its node, so why can't a container run more tasks/threads than the number of vcores configured on that node?
Is there a way to control maxVirtualCores for an application, and to set it much larger than the number of physical cores, say 100 on an 8-core machine? For a JVM process doing CPU-, IO-, and network-intensive work, 100 threads is not very many.
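From the error message, I assume the limit of 4 comes from the scheduler's maximum-allocation setting on the ResourceManager, i.e. something like this in yarn-site.xml (4 is the default value given in the Hadoop docs):

    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>4</value>
    </property>

Is raising this cluster-wide setting the only option, or can the limit be overridden per application?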