spark-dev mailing list archives

From Jacek Laskowski <>
Subject Does CoarseGrainedSchedulerBackend care about cores only? And disregards memory?
Date Thu, 23 Jun 2016 21:41:54 GMT

After reviewing makeOffer and launchTasks in
CoarseGrainedSchedulerBackend I came to the following conclusion:

Scheduling in Spark relies on cores only (not memory), i.e. the number
of tasks Spark can run on an executor is constrained solely by the
number of available cores, even though both memory and cores can be
specified explicitly when submitting a Spark application for execution.
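The cores-only behaviour described above can be sketched as follows. This is a hedged model, not the actual Spark source: `executorCores` stands in for `--executor-cores` and `cpusPerTask` for `spark.task.cpus`; the point is that executor memory never enters the calculation of how many tasks fit on an executor.

```scala
// Minimal sketch (assumption, not Spark source code): the number of
// tasks an executor can run concurrently is derived from cores alone.
object CoresOnlyScheduling {
  // Integer division: how many tasks fit given the cores each task needs.
  // Note there is no memory parameter anywhere in this computation.
  def maxConcurrentTasks(executorCores: Int, cpusPerTask: Int): Int =
    executorCores / cpusPerTask

  def main(args: Array[String]): Unit = {
    // An executor with 8 cores and the default spark.task.cpus = 1
    // runs 8 tasks at once, regardless of executor memory.
    println(maxConcurrentTasks(8, 1)) // 8
    // With spark.task.cpus = 2, only 4 tasks fit.
    println(maxConcurrentTasks(8, 2)) // 4
  }
}
```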

Would you agree? Am I missing anything important?

I was very surprised when I found this out, as I had thought memory
would also be a limiting factor.

Jacek Laskowski
Mastering Apache Spark
Follow me at

