spark-user mailing list archives

From Chetan Khatri <chetan.opensou...@gmail.com>
Subject Re: Spark Cluster over yarn cluster monitoring
Date Tue, 29 Oct 2019 07:34:11 GMT
Thanks, Jörn

On Sun, Oct 27, 2019 at 8:01 AM Jörn Franke <jornfranke@gmail.com> wrote:

> Use yarn queues:
>
>
> https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html
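For reference, the YARN-queue approach Jörn suggests could look roughly like the following fair-scheduler.xml sketch. The queue names and resource limits are made up for illustration, not taken from the thread:

```xml
<!-- Illustrative fair-scheduler.xml: one queue per customer so YARN,
     not the workflow, arbitrates capacity. Names/limits are examples. -->
<allocations>
  <queue name="customerY">
    <maxResources>40960 mb,16 vcores</maxResources>
    <weight>1.0</weight>
  </queue>
  <queue name="customerZ">
    <maxResources>40960 mb,16 vcores</maxResources>
    <weight>1.0</weight>
  </queue>
</allocations>
```

Jobs would then be submitted with e.g. `spark-submit --master yarn --queue customerZ ...`, and YARN holds each application until its queue has capacity, rather than the workflow having to poll cluster metrics itself.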
>
> On 27.10.2019 at 06:41, Chetan Khatri <chetan.opensource@gmail.com>
> wrote:
>
> 
> Could someone please help me understand this better?
>
> On Thu, Oct 17, 2019 at 7:41 PM Chetan Khatri <chetan.opensource@gmail.com>
> wrote:
>
>> Hi Users,
>>
>> I submit *X* jobs with Airflow to YARN as part of a workflow for
>> customer *Y*. I could potentially run the workflow for customer *Z* as
>> well, but first I need to check how many resources are available on the
>> cluster before the next customer's jobs should start.
>>
>> Could you please tell me the best way to handle this? Currently, I
>> just check that availableMB > 100 and then trigger the next Airflow DAG on YARN:
>>
>> GET http://rm-http-address:port/ws/v1/cluster/metrics
>>
>> Thanks.
>>
>>
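The polling approach described above can be sketched as follows. This is a minimal illustration, assuming the ResourceManager host/port placeholder from the message and the 100 MB threshold mentioned there; the helper names are hypothetical:

```python
# Sketch: check the YARN ResourceManager cluster-metrics endpoint before
# triggering the next customer's Airflow DAG. The URL host and the
# 100 MB threshold come from the thread; everything else is illustrative.
import json
from urllib.request import urlopen

RM_METRICS_URL = "http://rm-http-address:8088/ws/v1/cluster/metrics"  # placeholder host
MIN_AVAILABLE_MB = 100  # threshold used in the original message


def fetch_metrics(url: str = RM_METRICS_URL) -> dict:
    """GET the RM metrics endpoint and parse the JSON response."""
    with urlopen(url) as resp:
        return json.load(resp)


def cluster_has_capacity(metrics: dict, min_mb: int = MIN_AVAILABLE_MB) -> bool:
    """Return True when the cluster reports more free memory than min_mb."""
    return metrics["clusterMetrics"]["availableMB"] > min_mb


# Example with a canned response (a real check would call fetch_metrics()):
sample = {"clusterMetrics": {"availableMB": 2048, "availableVirtualCores": 16}}
print(cluster_has_capacity(sample))  # 2048 MB > 100 MB, so True
```

In an Airflow context this check would typically live in a sensor or a `ShortCircuitOperator`-style callable that gates the next DAG run; note that a raw availableMB check is racy if several workflows poll at once, which is why per-customer YARN queues are the more robust answer.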
