spark-issues mailing list archives

From "Andrew Ash (JIRA)" <>
Subject [jira] [Commented] (SPARK-1882) Support dynamic memory sharing in Mesos
Date Fri, 09 Jan 2015 05:04:34 GMT


Andrew Ash commented on SPARK-1882:

Yes -- in the simplest case, you have one machine with one job running that holds all of
its resources, and then a second job starts. You'd want the first job to yield half its
resources, so that the two jobs each end up with half the memory and half the cores of
the box. That means you'd need to change memory allocations while jobs are running.
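
The halving described above is just equal division of the box among running jobs. A minimal sketch of that arithmetic (names like `fairShareMb` are illustrative, not actual Spark or Mesos APIs):

```java
// Hypothetical sketch: when a new job arrives, each running job should
// shrink to an equal fraction of the machine's resources.
public class FairShare {
    // Memory (MB) each job should hold once numJobs jobs are running.
    static long fairShareMb(long totalMemMb, int numJobs) {
        return totalMemMb / numJobs;
    }

    public static void main(String[] args) {
        long totalMemMb = 64 * 1024;  // a 64 GB box
        // One job running alone may use everything.
        System.out.println(fairShareMb(totalMemMb, 1));  // 65536
        // A second job starts: each should yield down to half.
        System.out.println(fairShareMb(totalMemMb, 2));  // 32768
    }
}
```

The hard part the comment points at is not this arithmetic but the second step: actually reclaiming memory from a JVM executor that is already running.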

That said, I'm still new to this resource-management world and am trying to understand
whether this is actually an issue in practice. The recent work on dynamic scaling with
YARN and the separate shuffle server seems to have core sharing figured out, but from my
memory of watching this video I'm not sure it has memory sharing worked out.

> Support dynamic memory sharing in Mesos
> ---------------------------------------
>                 Key: SPARK-1882
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>    Affects Versions: 1.0.0
>            Reporter: Andrew Ash
> Fine-grained mode in Mesos currently supports sharing CPUs very well, but it requires
> that memory be pre-partitioned according to the executor memory parameter. Mesos
> supports dynamic memory allocation in addition to dynamic CPU allocation, so we should
> take advantage of this feature in Spark.
> See below: when the Mesos backend accepts a resource offer, it only checks that there
> is enough memory to cover sc.executorMemory, and it never takes a fraction of the
> memory available. The memory offer is accepted all-or-nothing at a pre-defined size.
> Coarse mode:
> Fine mode:
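
The all-or-nothing acceptance the issue describes can be sketched as follows. This is a hypothetical simplification, not the actual Mesos backend code; `EXECUTOR_MEM_MB` stands in for sc.executorMemory, and the method names are illustrative:

```java
// Hypothetical sketch of the offer-acceptance logic described in SPARK-1882.
public class OfferCheck {
    static final long EXECUTOR_MEM_MB = 8 * 1024;  // pre-defined executor size

    // Current fine-grained behavior: all-or-nothing on a fixed size.
    static long memToAccept(long offeredMemMb) {
        return offeredMemMb >= EXECUTOR_MEM_MB ? EXECUTOR_MEM_MB : 0;
    }

    // What the issue asks for: accept a fraction of whatever is offered,
    // bounded below by a minimum so tiny slivers are still declined.
    static long memToAcceptDynamic(long offeredMemMb, long minMemMb) {
        return offeredMemMb >= minMemMb ? offeredMemMb : 0;
    }

    public static void main(String[] args) {
        // A 6 GB offer is rejected outright under the fixed check...
        System.out.println(memToAccept(6 * 1024));             // 0
        // ...but under dynamic allocation it could still be used.
        System.out.println(memToAcceptDynamic(6 * 1024, 512)); // 6144
    }
}
```

The contrast between the two methods is the whole proposal: stop treating the executor memory size as an indivisible unit when evaluating offers.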

This message was sent by Atlassian JIRA

