spark-dev mailing list archives

From Evan Chan <...@ooyala.com>
Subject Fair scheduler documentation
Date Fri, 06 Sep 2013 21:49:18 GMT
Are we ready to document the fair scheduler? This section in the
standalone docs seems out of date...

# Job Scheduling

The standalone cluster mode currently only supports a simple FIFO scheduler
across jobs. However, to allow multiple concurrent jobs, you can control the
maximum amount of resources each Spark job will acquire. By default, a job
will acquire *all* the cores in the cluster, which only makes sense if you
run just one job at a time. You can cap the number of cores using
`System.setProperty("spark.cores.max", "10")` (for example). This value must
be set *before* initializing your SparkContext.
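
For anyone updating the docs, a minimal sketch of what that looks like in
practice might help. The master URL and app name below are made up, and I'm
assuming the 0.8-era `org.apache.spark` package:

```scala
import org.apache.spark.SparkContext

object CappedJob {
  def main(args: Array[String]) {
    // Cap this job at 10 cores so other jobs can run concurrently.
    // This must be set *before* the SparkContext is created.
    System.setProperty("spark.cores.max", "10")

    val sc = new SparkContext("spark://master:7077", "CappedJob")
    // ... job logic here ...
    sc.stop()
  }
}
```

As I understand it, without the cap the standalone master hands the first
app every available core, and a second job submitted afterwards sits in the
queue with zero cores until the first one finishes.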


-- 
Evan Chan
Staff Engineer
ev@ooyala.com | http://www.ooyala.com/
