spark-dev mailing list archives

From Matei Zaharia <matei.zaha...@gmail.com>
Subject Re: Is there any plan to develop an application level fair scheduler?
Date Wed, 15 Jan 2014 04:10:08 GMT
This is true for now; we didn’t want to replicate those systems. But it may change if we
see demand for fair scheduling in our standalone cluster manager.
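
For reference, the standalone master today queues applications FIFO and by default gives each one every available core; the main per-application control is a core cap. A minimal sketch (master URL, app name, and cap value are all illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    // Standalone mode schedules whole applications FIFO; capping the
    // cores one application may take lets several run side by side.
    val conf = new SparkConf()
      .setMaster("spark://master:7077") // illustrative master URL
      .setAppName("capped-app")         // illustrative app name
      .set("spark.cores.max", "8")      // cap this application's cores
    val sc = new SparkContext(conf)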

Matei

On Jan 14, 2014, at 6:32 PM, Xia, Junluan <junluan.xia@intel.com> wrote:

> Yes, Spark depends on Yarn or Mesos for application level scheduling.
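> 
> (For jobs inside a single application there is already a fair scheduler. A minimal sketch, with the app name and pool name illustrative:)
> 
>     import org.apache.spark.{SparkConf, SparkContext}
> 
>     // Fairly share executors among jobs inside one SparkContext.
>     val conf = new SparkConf()
>       .setMaster("local[2]")               // illustrative master
>       .setAppName("shared-context")        // illustrative app name
>       .set("spark.scheduler.mode", "FAIR") // default is FIFO
>     val sc = new SparkContext(conf)
> 
>     // Each thread tags the jobs it submits with a scheduler pool.
>     sc.setLocalProperty("spark.scheduler.pool", "adhoc")
>     sc.parallelize(1 to 1000).count()      // runs in the "adhoc" pool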
> 
> -----Original Message-----
> From: Nan Zhu [mailto:zhunanmcgill@gmail.com] 
> Sent: Tuesday, January 14, 2014 9:43 PM
> To: dev@spark.incubator.apache.org
> Subject: Re: Is there any plan to develop an application level fair scheduler?
> 
> Hi, Junluan,   
> 
> Thank you for the reply  
> 
> But as a long-term plan, will Spark keep depending on Yarn and Mesos for application-level scheduling in the coming versions?
> 
> Best,  
> 
> --  
> Nan Zhu
> 
> 
> On Tuesday, January 14, 2014 at 12:56 AM, Xia, Junluan wrote:
> 
>> Are you sure that you must deploy Spark in standalone mode? (It currently supports only FIFO.)
>> 
>> If you can set up Spark on Yarn or Mesos, a fair scheduler is already supported at the application level.
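>> 
>> (Application-level fairness then comes from the resource manager itself. For example on YARN, a minimal sketch of the cluster-side switch, assuming Hadoop 2.x defaults:)
>> 
>>     <!-- yarn-site.xml -->
>>     <property>
>>       <name>yarn.resourcemanager.scheduler.class</name>
>>       <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>     </property>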
>> 
>> -----Original Message-----
>> From: Nan Zhu [mailto:zhunanmcgill@gmail.com]  
>> Sent: Tuesday, January 14, 2014 10:13 AM
>> To: dev@spark.incubator.apache.org
>> Subject: Is there any plan to develop an application level fair scheduler?
>> 
>> Hi, All  
>> 
>> Is there any plan to develop an application level fair scheduler?
>> 
>> I think it would have more value than the fair scheduler within an application (actually, I don’t understand why we would want to fairly share resources among jobs within one application; usually, users submit different applications, not jobs)…
>> 
>> Best,  
>> 
>> --  
>> Nan Zhu
>> 
>> 
> 
> 

