spark-user mailing list archives

From Steve Loughran <>
Subject Re: spark on yarn
Date Thu, 26 May 2016 09:36:46 GMT

> On 21 May 2016, at 15:14, Shushant Arora <> wrote:
> And will it allocate rest executors when other containers get freed which were occupied
> by other hadoop jobs/spark applications?

Requests will go into the queue(s); they'll stay outstanding until things free up *or more
machines join the cluster*. Whoever is in the higher-priority queue gets that free capacity.
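As a concrete illustration of queues sharing a cluster, here is a hypothetical capacity-scheduler.xml fragment (queue names and percentages are made up for this sketch, not from the original mail); the property names follow the Hadoop CapacityScheduler docs:

```xml
<!-- Hypothetical sketch: two queues under root. "prod" gets a guaranteed
     70% share, "dev" 30%, but dev may borrow idle capacity up to 100%. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>prod,dev</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <value>30</value>
  </property>
  <property>
    <!-- let dev grab idle capacity beyond its 30% guaranteed share -->
    <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
    <value>100</value>
  </property>
</configuration>
```

An app submitted to a queue whose guaranteed capacity is fully used simply waits; its container requests stay pending at the ResourceManager until capacity frees up.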

You can also play with pre-emption, in which low-priority work can get killed without warning.
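Pre-emption for the capacity scheduler is switched on at the ResourceManager; a minimal yarn-site.xml sketch (property names per the Hadoop docs, assuming the CapacityScheduler is in use):

```xml
<!-- Sketch: enable the scheduler monitor and the proportional
     capacity pre-emption policy, which reclaims containers from
     over-capacity queues for under-served ones. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.scheduler.monitor.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.monitor.policies</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
  </property>
</configuration>
```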

> And is there any minimum (% of executors demanded vs available) executors it wait for
> to be freed or just start with even 1?

That's called "gang scheduling", and no, it's not in YARN. It's a tricky one, as it complicates
allocation and can result in either things never getting scheduled, or in >1 app each holding
incompletely allocated containers: while the capacity would be enough for one app on its own,
if the resources are split across both, neither can start.
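Even without gang scheduling in YARN, Spark can approximate "wait for some minimum" on its own side: it can delay scheduling tasks until a fraction of the requested executors have registered. A spark-defaults.conf sketch (these are real Spark properties; the values here are just illustrative):

```
# Don't start scheduling tasks until 80% of requested executors
# have registered, or until the waiting time below has elapsed.
spark.scheduler.minRegisteredResourcesRatio        0.8
spark.scheduler.maxRegisteredResourcesWaitingTime  30s
```

This only delays task scheduling within the app; it does not hold YARN containers atomically the way true gang scheduling would.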

look at YARN-896 to see the big todo list for services
