spark-user mailing list archives

From Sean Owen <so...@cloudera.com>
Subject Re: Spark Worker Core Allocation
Date Sun, 08 Jun 2014 19:44:45 GMT
Have a look at:

https://spark.apache.org/docs/1.0.0/job-scheduling.html
https://spark.apache.org/docs/1.0.0/spark-standalone.html

The default is to grab resources on all nodes. In your case you could set
spark.cores.max to 2 or less to enable running two apps simultaneously on a
cluster of 4-core machines.
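
For instance, a rough sketch of what that looks like when building the
SparkConf (the master URL, app name, and the value 3 are just placeholders
here; pick whatever cap fits your cluster):

  import org.apache.spark.{SparkConf, SparkContext}

  // Cap the total cores this application may take across the whole cluster;
  // the remaining cores stay free for a second application.
  val conf = new SparkConf()
    .setMaster("spark://master-host:7077") // placeholder master URL
    .setAppName("first-app")               // placeholder app name
    .set("spark.cores.max", "3")
  val sc = new SparkContext(conf)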

See also spark.deploy.defaultCores

But you may really be after spark.deploy.spreadOut. If you make it false, it
will instead try to take all resources from as few nodes as possible.
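
As a sketch only, assuming a standalone master whose settings you can edit,
that property (and spark.deploy.defaultCores, mentioned above) is set on the
master side, e.g. in conf/spark-env.sh, followed by a master restart; the
value 4 below is just an illustration:

  # Consolidate each app onto as few workers as possible, and cap apps
  # that do not set spark.cores.max themselves.
  SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=false -Dspark.deploy.defaultCores=4"
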
 On Jun 8, 2014 1:55 AM, "Subacini B" <subacini@gmail.com> wrote:

> Hi All,
>
> My cluster has 5 workers, each with 4 cores (so 20 cores total). It is in
> standalone mode (not using Mesos or YARN). I want two programs to run at the
> same time, so I have configured "spark.cores.max=3", but when I run the
> program it allocates three cores by taking one core from each worker, so
> three workers end up running the program.
>
> How do I configure it so that it takes 3 cores from 1 worker, so that I can
> use the other workers for the second program?
>
> Thanks in advance
> Subacini
>
