Hello Ji, 

Spark launches executors round-robin across resource offers, so when an agent's resources are broken into multiple offers, several executors can end up on a single agent. From your description, though, it's not clear why your other agents get no executors at all; the offers from those agents may be insufficient in some way.

The Mesos master log should show offers being declined by your Spark driver. Do you see that? With DEBUG-level logging enabled in the Spark driver, you should also see the declined offers there.

Finally, if your Spark framework isn't receiving any resource offers at all, the cause could be roles configured on your agents or quota set for other frameworks. Have you set up either of those? Hope this helps!
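To make the placement behavior concrete, here is a simplified model of offer-based scheduling (not Spark's actual scheduler code; the agent names and core counts are made up for illustration). Each offer large enough to fit an executor gets one, so an agent whose resources arrive as several separate offers can absorb most of the executors before other agents are even considered:

```python
# Simplified sketch of per-offer executor placement.
# Assumed settings: spark.executor.cores=2, spark.cores.max=8.
EXECUTOR_CORES = 2
CORES_MAX = 8
wanted = CORES_MAX // EXECUTOR_CORES  # 4 executors requested

# Hypothetical offers: agent-1's 6 cores arrive fragmented into
# three 2-core offers, plus one 2-core offer from agent-2.
offers = [("agent-1", 2), ("agent-1", 2), ("agent-1", 2), ("agent-2", 2)]

placements = []
for agent, cores in offers:
    if len(placements) >= wanted:
        break  # stop once the requested executor count is reached
    if cores >= EXECUTOR_CORES:
        placements.append(agent)  # launch one executor on this offer

print(placements)
```

In this toy run, three of the four executors land on agent-1 simply because its offers arrived first, which matches the pile-up behavior you observed.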


On Tue, Dec 5, 2017 at 10:45 PM, Ji Yan <jiyan@drive.ai> wrote:
Hi all,

I am running Spark 2.0 on Mesos 1.1 and was trying to split my job across several nodes. I set the number of executors via the formula (spark.cores.max / spark.executor.cores). The behavior I saw was that Spark fills one Mesos node with as many executors as it can, then stops scheduling on the other Mesos nodes even though it has not yet scheduled all the executors I asked for! This is super weird!

Did anyone notice this behavior before? Any help appreciated!
