spark-issues mailing list archives

From "t oo (Jira)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-27750) Standalone scheduler - ability to prioritize applications over drivers, many drivers act like Denial of Service
Date Thu, 05 Dec 2019 23:30:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16989249#comment-16989249 ]

t oo commented on SPARK-27750:
------------------------------

bump

 

> Standalone scheduler - ability to prioritize applications over drivers, many drivers act like Denial of Service
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27750
>                 URL: https://issues.apache.org/jira/browse/SPARK-27750
>             Project: Spark
>          Issue Type: New Feature
>          Components: Scheduler
>    Affects Versions: 3.0.0
>            Reporter: t oo
>            Priority: Minor
>
> If I submit 1000 spark-submit drivers, they consume all the cores on my cluster (essentially acting like a Denial of Service), and no Spark 'application' ever gets to run, since the cores are all consumed by the 'drivers'. This feature is about having the ability to prioritize applications over drivers so that at least some 'applications' can start running. I guess the rule would be something like (see the sketch after this message):
> if (driver.state = 'submitted' and there exists some app with app.state = 'submitted') then schedule the app first (set app.state = 'running');
> if all apps have app.state = 'running', then schedule a waiting driver (set driver.state = 'running').
> 
> Secondary to this, why must a driver consume a minimum of one entire core?
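
A minimal Scala sketch of the prioritization rule proposed above. This is illustrative only, not Spark's actual Master internals: the names PrioritizingScheduler, WaitingApp, WaitingDriver, and the single freeCores pool are hypothetical simplifications (the real standalone Master tracks waiting drivers and applications per worker in its schedule() pass).

    import scala.collection.mutable.Queue

    // Hypothetical stand-ins for queued drivers and applications.
    case class WaitingDriver(id: String, cores: Int)
    case class WaitingApp(id: String, cores: Int)

    // Simplification: one cluster-wide pool of free cores.
    class PrioritizingScheduler(var freeCores: Int) {
      val waitingApps    = Queue.empty[WaitingApp]
      val waitingDrivers = Queue.empty[WaitingDriver]

      def schedule(): Unit = {
        // Applications first: drain the app queue while cores remain.
        while (waitingApps.nonEmpty && freeCores >= waitingApps.head.cores) {
          val app = waitingApps.dequeue()
          freeCores -= app.cores
          println(s"launching app ${app.id}")
        }
        // Drivers only get cores once no application is left waiting,
        // so a flood of submitted drivers cannot starve the apps.
        while (waitingApps.isEmpty && waitingDrivers.nonEmpty &&
               freeCores >= waitingDrivers.head.cores) {
          val d = waitingDrivers.dequeue()
          freeCores -= d.cores
          println(s"launching driver ${d.id}")
        }
      }
    }

    object Demo extends App {
      val s = new PrioritizingScheduler(freeCores = 4)
      s.waitingDrivers ++= Seq(WaitingDriver("d1", 1), WaitingDriver("d2", 1))
      s.waitingApps += WaitingApp("a1", 2)
      s.schedule() // launches a1 first, then d1 and d2 with the leftover cores
    }

On the secondary question: the driver's core request in standalone mode (spark.driver.cores) is an integer configuration defaulting to 1, so as far as the Master's accounting goes a fractional core cannot be expressed; allowing that would need a change to how cores are counted.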



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


