spark-user mailing list archives

From Shushant Arora <shushantaror...@gmail.com>
Subject Re: spark on yarn
Date Sat, 21 May 2016 14:16:38 GMT
3. And does the same behavior apply to streaming applications also?

On Sat, May 21, 2016 at 7:44 PM, Shushant Arora <shushantarora09@gmail.com>
wrote:

> And will it allocate the rest of the executors when containers that were
> occupied by other Hadoop jobs/Spark applications get freed?
>
> And is there any minimum (% of executors demanded vs. available) it waits
> for to be freed, or does it just start with even 1?
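>
> For context, I see settings like the following that look related (a sketch
> of spark-defaults.conf; the values are illustrative and I'm assuming they
> apply to my version):
>
>   # fraction of requested executors that must register before scheduling starts
>   spark.scheduler.minRegisteredResourcesRatio        0.8
>   # stop waiting after this long and start with whatever has registered
>   spark.scheduler.maxRegisteredResourcesWaitingTime  30s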
>
> Thanks!
>
> On Thu, Apr 21, 2016 at 8:39 PM, Steve Loughran <stevel@hortonworks.com>
> wrote:
>
>> If there isn't enough space in your cluster to create all the executors
>> you asked for, Spark will only get the ones which can be allocated. It
>> will start work without waiting for the others to arrive.
>>
>> Make sure you ask for enough memory: YARN is a lot more unforgiving about
>> memory use than it is about CPU.
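>>
>> For example, something along these lines (a sketch only; the memory sizes
>> are illustrative and app.jar is a placeholder for your application):
>>
>>   # memoryOverhead is the off-heap headroom YARN counts against the container
>>   spark-submit --master yarn-cluster \
>>     --num-executors 200 \
>>     --executor-cores 2 \
>>     --executor-memory 4g \
>>     --conf spark.yarn.executor.memoryOverhead=512 \
>>     app.jar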
>>
>> > On 20 Apr 2016, at 16:21, Shushant Arora <shushantarora09@gmail.com>
>> > wrote:
>> >
>> > I am running a Spark application on a YARN cluster.
>> >
>> > say I have 100 vcores available in the cluster, and I start the Spark
>> > application with --num-executors 200 --executor-cores 2 (so I need a
>> > total of 200*2=400 vcores), but in my cluster only 100 are available.
>> >
>> > What will happen? Will the job abort, or will it be submitted
>> > successfully, with 100 vcores allocated to 50 executors and the rest of
>> > the executors started as soon as vcores become available?
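>> >
>> > For reference, I plan to check how many executors actually registered
>> > with something like this (a sketch from spark-shell;
>> > getExecutorMemoryStatus reports the driver too, hence the minus one):
>> >
>> >   // one entry per registered block manager, including the driver's
>> >   val registered = sc.getExecutorMemoryStatus.size - 1
>> >   println(s"registered executors: $registered")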
>> >
>> > Please note dynamic allocation is not enabled in the cluster. I have
>> > the old version 1.2.
>> >
>> > Thanks
>> >
>>
>>
>
