spark-dev mailing list archives

From Evan Chan
Subject Fair scheduler documentation
Date Fri, 06 Sep 2013 21:49:18 GMT
Are we ready to document the fair scheduler? This section in the
standalone docs seems out of date:

# Job Scheduling

The standalone cluster mode currently only supports a simple FIFO scheduler
across jobs.
However, to allow multiple concurrent jobs, you can control the maximum
number of resources each Spark job will acquire.
By default, it will acquire *all* the cores in the cluster, which only
makes sense if you run just a single
job at a time. You can cap the number of cores using
`System.setProperty("spark.cores.max", "10")` (for example).
This value must be set *before* initializing your SparkContext.
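For reference, the capped setup would look something like this. This is a minimal sketch, not the documented example: the master URL `spark://master:7077` and app name are placeholders, and the import path assumes the `org.apache.spark` package layout (it may differ on older releases).

```scala
import org.apache.spark.SparkContext

// Cap this application at 10 cores cluster-wide under the standalone
// scheduler. Per the docs above, this must be set *before* the
// SparkContext is created, or it has no effect.
System.setProperty("spark.cores.max", "10")

// Placeholder master URL and application name.
val sc = new SparkContext("spark://master:7077", "ExampleApp")
```

With this cap in place, a second concurrent job can acquire the remaining cores instead of queuing behind the first one in FIFO order.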

Evan Chan
Staff Engineer
