It sounds like you are running Spark jobs on YARN and want to submit each job to its own queue. If so, you first need to set up those queues on YARN by configuring the scheduler.
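For example, here is a minimal sketch assuming the default CapacityScheduler (the queue name "spark" and the 50/50 split are just illustrations; on EMR the file usually lives at /etc/hadoop/conf/capacity-scheduler.xml):

  <!-- capacity-scheduler.xml: define a second queue next to "default" -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,spark</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>50</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.spark.capacity</name>
    <value>50</value>
  </property>

After editing the file, refresh the queues with:

  yarn rmadmin -refreshQueues

and then point each job at its own queue with spark-submit's --queue flag:

  spark-submit --master yarn --queue default ...
  spark-submit --master yarn --queue spark ...

One more thing worth checking: with --num-executors 10 and --executor-cores 5 you are asking for 50 cores, but your cluster only reports 16 VCores in total, so the first job takes everything YARN can give it and the second one sits in ACCEPTED. Even with two queues, each job has to fit inside its queue's share; something like --num-executors 2 --executor-cores 3 --executor-memory 2G per job (again, just an illustration for a 2-node, 16-VCore cluster) would leave room for two jobs to run side by side.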
--------- Original Message ---------
Sender : Divya Gehlot <firstname.lastname@example.org>
Date : 2016-09-14 15:08 (GMT+9)
Title : how to specify cores and executors to run spark jobs simultaneously
I am on an EMR cluster and my cluster configuration is as below:
Number of nodes including master node - 3
VCores Total : 16
Active Nodes : 2
Spark version - 1.6.1
Parameters set in spark-defaults.conf
Do let me know if you need any other info regarding the cluster.
The current configuration for spark-submit is:
--driver-memory 5G \
--executor-memory 2G \
--executor-cores 5 \
--num-executors 10 \
Currently, with the above job configuration, if I try to run another Spark job it stays in the ACCEPTED state till the first one finishes.
How do I optimize or update the above spark-submit configuration to run more Spark jobs simultaneously?
Would really appreciate the help.