spark-dev mailing list archives

From Jacek Laskowski <ja...@japila.pl>
Subject Does CoarseGrainedSchedulerBackend care about cores only? And disregards memory?
Date Thu, 23 Jun 2016 21:41:54 GMT
Hi,

After reviewing makeOffers and launchTasks in
CoarseGrainedSchedulerBackend, I came to the following conclusion:

Scheduling in Spark relies on cores only (not memory), i.e. the number
of tasks Spark can run on an executor is constrained only by the number
of cores available. When submitting a Spark application for execution,
both memory and cores can be specified explicitly.
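
To make the point concrete, here is a minimal, self-contained sketch of how I
read the offer loop: each executor advertises only its free cores, and tasks
are handed out while freeCores is at least spark.task.cpus. The WorkerOffer
shape mirrors the real class, but CoreOnlyScheduling, cpusPerTask and
tasksPerOffer are my own simplification for illustration, not the actual
Spark code:

case class WorkerOffer(executorId: String, host: String, freeCores: Int)

object CoreOnlyScheduling {
  // spark.task.cpus -- how many cores a single task claims (defaults to 1)
  val cpusPerTask: Int = 1

  // How many tasks each offer could accept: only cores are consulted;
  // executor memory never appears in the calculation.
  def tasksPerOffer(offers: Seq[WorkerOffer]): Map[String, Int] =
    offers.map(o => o.executorId -> o.freeCores / cpusPerTask).toMap

  def main(args: Array[String]): Unit = {
    val offers = Seq(
      WorkerOffer("exec-1", "host-a", freeCores = 4),
      WorkerOffer("exec-2", "host-b", freeCores = 2))
    println(tasksPerOffer(offers))  // Map(exec-1 -> 4, exec-2 -> 2)
  }
}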

Would you agree? Am I missing anything important?

I was very surprised when I found this out, as I thought memory would
also be a limiting factor.

Regards,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


