spark-issues mailing list archives

From "Aaron Davidson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-1860) Standalone Worker cleanup should not clean up running executors
Date Wed, 01 Oct 2014 02:06:33 GMT

    [ https://issues.apache.org/jira/browse/SPARK-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154228#comment-14154228 ]

Aaron Davidson commented on SPARK-1860:
---------------------------------------

Your logic SGTM, but I would add one additional check to avoid deleting the directory for
an application that still has running executors on that node, just to make absolutely sure
we don't delete app directories that happen to sit idle for a while. This check can be
performed by iterating over the "executors" map in Worker.scala and matching the appId
against the app directory's name.
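A minimal sketch of the guard described above. The names `ExecutorRunner` and `executors` mirror Worker.scala, but the snippet is self-contained and simplified, not Spark's actual code; the assumption is that app directories under the worker's work dir are named after their appId.

```scala
import java.io.File

// Stand-in for the real ExecutorRunner; only the appId field matters here.
case class ExecutorRunner(appId: String)

// Collect the appIds of all executors currently running on this worker.
def runningAppIds(executors: Map[String, ExecutorRunner]): Set[String] =
  executors.values.map(_.appId).toSet

// An app directory is safe to delete only if no running executor's appId
// matches the directory's name.
def isSafeToDelete(appDir: File, executors: Map[String, ExecutorRunner]): Boolean =
  !runningAppIds(executors).contains(appDir.getName)
```

With this in place, the periodic cleanup would skip `work/app-1` while an executor for `app-1` is alive, regardless of the directory's modification time.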


> Standalone Worker cleanup should not clean up running executors
> ---------------------------------------------------------------
>
>                 Key: SPARK-1860
>                 URL: https://issues.apache.org/jira/browse/SPARK-1860
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.0.0
>            Reporter: Aaron Davidson
>            Priority: Blocker
>
> With its default settings, the standalone worker cleanup code cleans up all application
> data every 7 days. This includes jars that were added to any executors that happen to be
> running for longer than 7 days, hitting streaming jobs especially hard.
> Executors' log/data folders should not be cleaned up while they are still running. Until
> that is fixed, this behavior should not be enabled by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

