spark-user mailing list archives

From Thodoris Zois <>
Subject Spark on Mesos - Weird behavior
Date Mon, 09 Jul 2018 18:04:49 GMT
Hello list,

We are running Apache Spark on a Mesos cluster and we are seeing some weird executor behavior.
When we submit an app with, e.g., 10 cores and 2 GB of memory per executor and max cores set to 30,
we expect to see 3 executors running on the cluster. However, sometimes there are only 2... Spark
applications are not the only ones that run on the cluster. I guess that Spark starts executors on
the available offers even if they do not satisfy our needs. Is there any configuration that we can
use to prevent Spark from starting tasks when there are no resource offers for the total number
of executors?
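For context, one pair of settings that looks relevant here is Spark's scheduler registration knobs, which delay task scheduling (not executor launch) until a fraction of the requested resources has registered. A sketch of how they could be passed via spark-submit; the master URL, application file, and values are placeholder assumptions for the 30-core / 10-cores-per-executor example above:

```shell
# Sketch, not a confirmed fix: hold off task scheduling until all
# requested cores have registered, waiting up to 120s before giving up.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.executor.cores=10 \
  --conf spark.executor.memory=2g \
  --conf spark.cores.max=30 \
  --conf spark.scheduler.minRegisteredResourcesRatio=1.0 \
  --conf spark.scheduler.maxRegisteredResourcesWaitingTime=120s \
  my_app.py
```

Note that these properties only gate when tasks start running; Spark may still launch fewer executors than expected if the Mesos offers cannot accommodate them.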

Thank you 
- Thodoris 
