spark-issues mailing list archives

From "Patrick Wendell (JIRA)" <>
Subject [jira] [Updated] (SPARK-3174) Provide elastic scaling within a Spark application
Date Mon, 06 Oct 2014 04:32:33 GMT


Patrick Wendell updated SPARK-3174:
    Component/s: YARN
                 Spark Core

> Provide elastic scaling within a Spark application
> --------------------------------------------------
>                 Key: SPARK-3174
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>         Attachments: SPARK-3174design.pdf
> A common complaint with Spark in a multi-tenant environment is that applications have
> a fixed allocation that doesn't grow and shrink with their resource needs. We're blocked
> on YARN-1197 for dynamically changing the resources within executors, but we can still
> allocate and discard whole executors.
> I think it would be useful to have some heuristics that
> * Request more executors when many pending tasks are building up
> * Request more executors when RDDs can't fit in memory
> * Discard executors when few tasks are running / pending and there's not much in memory
> Bonus points: migrate blocks from executors we're about to discard to executors with
> free space.
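The request/discard heuristics quoted above can be sketched as a single target-count function. This is an illustrative sketch only, not Spark's actual dynamic-allocation code; the `tasks_per_executor` parameter (task slots per executor) and the `min_executors` floor are assumptions introduced for the example:

```python
# Sketch of the heuristic described in SPARK-3174 (not the real implementation):
# scale the executor count up when pending tasks accumulate, and down when
# executors would otherwise sit idle.

def desired_executors(pending_tasks, running_tasks, tasks_per_executor,
                      current_executors, min_executors=1):
    """Return the executor count the heuristic would target.

    pending_tasks / running_tasks: current scheduler queue state
    tasks_per_executor: assumed task slots per executor (e.g. cores / CPUs per task)
    """
    # Executors needed to run every pending and running task at once, rounded up.
    needed = -(-(pending_tasks + running_tasks) // tasks_per_executor)
    if pending_tasks > 0 and needed > current_executors:
        return needed                      # pending tasks building up: request more
    if pending_tasks == 0 and needed < current_executors:
        return max(needed, min_executors)  # little to do: discard idle executors
    return current_executors               # otherwise leave the allocation alone
```

For example, with 20 pending and 4 running tasks at 4 slots per executor, the function asks for 6 executors; with an empty queue it shrinks back toward the floor.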

This message was sent by Atlassian JIRA
