spark-dev mailing list archives

From Patrick Wendell <pwend...@gmail.com>
Subject Re: Fair scheduler documentation
Date Fri, 06 Sep 2013 22:19:59 GMT
Matei mentioned to me that he was going to write docs for this. Matei,
is that still your intention?

- Patrick

On Fri, Sep 6, 2013 at 2:49 PM, Evan Chan <ev@ooyala.com> wrote:
> Are we ready to document the fair scheduler? This section of the
> standalone docs seems out of date:
>
> # Job Scheduling
>
> The standalone cluster mode currently only supports a simple FIFO scheduler
> across jobs. However, to allow multiple concurrent jobs, you can control the
> maximum number of cores each Spark job will acquire. By default, a job will
> acquire *all* the cores in the cluster, which only makes sense if you run
> just one job at a time. You can cap the number of cores using
> `System.setProperty("spark.cores.max", "10")` (for example). This value
> must be set *before* initializing your SparkContext.
>
>
> --
> Evan Chan
> Staff Engineer
> ev@ooyala.com
> http://www.ooyala.com/
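
For anyone landing on this thread later, here is a minimal sketch of the cap
the quoted docs describe, in Scala. It assumes the pre-SparkConf
System.setProperty API that the quoted snippet uses; the master URL and app
name are placeholders, not real values.

    import org.apache.spark.SparkContext

    object CappedApp {
      def main(args: Array[String]): Unit = {
        // Cap this job at 10 cores; otherwise it acquires every core in the
        // cluster. Must be set *before* the SparkContext is constructed.
        System.setProperty("spark.cores.max", "10")

        // Placeholder master URL and app name for a standalone cluster.
        val sc = new SparkContext("spark://master:7077", "CappedApp")
        try {
          // ... job logic; a trivial action just to exercise the context ...
          println(sc.parallelize(1 to 100).count())
        } finally {
          sc.stop()
        }
      }
    }

Setting the property after the SparkContext exists has no effect, which is
why the docs stress ordering here.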
