spark-issues mailing list archives

From "Hyukjin Kwon (Jira)" <>
Subject [jira] [Resolved] (SPARK-20869) Master should clear failed apps when worker down
Date Tue, 08 Oct 2019 05:42:12 GMT


Hyukjin Kwon resolved SPARK-20869.
    Resolution: Incomplete

> Master should clear failed apps when worker down
> ------------------------------------------------
>                 Key: SPARK-20869
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.3.0
>            Reporter: Li Yichao
>            Priority: Minor
>              Labels: bulk-closed
>   Original Estimate: 2h
>  Remaining Estimate: 2h
> In `Master.removeWorker`, the master clears executor and driver state, but does not clear
app state. App state is cleared only when `UnregisterApplication` is received or when `onDisconnect`
fires: the first happens when the driver shuts down gracefully, the second when Netty's
`channelInactive` is invoked (i.e. when the channel is closed). Neither handles the case
of a network partition between master and worker.
> Following the steps in [SPARK-19900|]
and the [screenshots|]:
when worker1 is partitioned from the master, the app `app-xxx-000` still shows as running instead
of finished, even though worker1 is down.
> cc [~CodingCat]
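To make the described gap concrete, here is a minimal toy model in plain Python (not Spark's actual Scala code; the class and method names are illustrative only): `remove_worker` clears executor and driver state but never marks the app finished, so a partitioned worker leaves its app stuck in a running state, while `remove_worker_fixed` sketches the proposed behavior of also finishing the affected app.

```python
# Toy model of the state-clearing gap described in this issue.
# All names here are illustrative, not Spark's real API.

class ToyMaster:
    def __init__(self):
        self.apps = {}        # app_id -> state ("RUNNING" / "FINISHED")
        self.executors = {}   # worker_id -> set of executor ids
        self.drivers = {}     # worker_id -> app_id of the driver on that worker

    def register(self, worker_id, app_id, executor_id):
        self.apps[app_id] = "RUNNING"
        self.executors.setdefault(worker_id, set()).add(executor_id)
        self.drivers[worker_id] = app_id

    def remove_worker(self, worker_id):
        # Mirrors the reported behavior: executor and driver state are
        # cleared, but the app itself is never marked finished.
        self.executors.pop(worker_id, None)
        self.drivers.pop(worker_id, None)

    def remove_worker_fixed(self, worker_id):
        # Sketch of the proposed improvement: also finish apps whose
        # driver ran on the removed worker.
        app_id = self.drivers.get(worker_id)
        if app_id is not None:
            self.apps[app_id] = "FINISHED"
        self.remove_worker(worker_id)

master = ToyMaster()
master.register("worker1", "app-xxx-000", "exec-1")
master.remove_worker("worker1")
print(master.apps["app-xxx-000"])  # the app is still "RUNNING": the bug
```

Under a real network partition, neither `UnregisterApplication` nor a channel close ever arrives, so only a cleanup hook in worker removal (as in `remove_worker_fixed`) can release the app state.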

This message was sent by Atlassian Jira

