spark-user mailing list archives

From Teng Qiu <teng...@gmail.com>
Subject Re: standalone mode only supports FIFO scheduler across applications ? still in spark 2.0 time ?
Date Sat, 16 Jul 2016 18:11:53 GMT
Hi Mark, thanks. We just want to keep our system as simple as
possible. Using YARN means we would need to maintain a full-size
Hadoop cluster; we are using S3 as the storage layer, so HDFS is not
needed, and a Hadoop cluster is a bit of overkill. Mesos is an option,
but it still brings extra operational costs.
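For reference, the standalone docs do describe capping the resources each application claims, so that several applications can run concurrently even under the FIFO scheduler. A sketch in spark-defaults.conf (the property names are the documented ones; the values are illustrative and need tuning for the cluster):

```
# spark-defaults.conf -- illustrative values
# Cap the total cores each application may take, so FIFO does not
# hand the first application the entire cluster (default: all cores)
spark.cores.max        4
# Memory per executor (default: 1g)
spark.executor.memory  2g
```

The same can be set per application via SparkConf, e.g. conf.set("spark.cores.max", "4"), before creating the SparkContext.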

So... any suggestion from you?

Thanks


2016-07-15 18:51 GMT+02:00 Mark Hamstra <mark@clearstorydata.com>:
> Nothing has changed in that regard, nor is there likely to be "progress",
> since more sophisticated or capable resource scheduling at the Application
> level is really beyond the design goals for standalone mode.  If you want
> more in the way of multi-Application resource scheduling, then you should be
> looking at Yarn or Mesos.  Is there some reason why neither of those options
> can work for you?
>
> On Fri, Jul 15, 2016 at 9:15 AM, Teng Qiu <tengqiu@gmail.com> wrote:
>>
>> Hi,
>>
>>
>> http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
>> The standalone cluster mode currently only supports a simple FIFO
>> scheduler across applications.
>>
>> Is this sentence still true? Any progress on this? It would be really
>> helpful. Is there a roadmap?
>>
>> Thanks
>>
>> Teng
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>>
>

