spark-issues mailing list archives

From "Aaron Davidson (JIRA)" <>
Subject [jira] [Created] (SPARK-1582) Job cancellation does not interrupt threads
Date Wed, 23 Apr 2014 04:47:17 GMT
Aaron Davidson created SPARK-1582:

             Summary: Job cancellation does not interrupt threads
                 Key: SPARK-1582
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.0.0, 0.9.1
            Reporter: Aaron Davidson
            Assignee: Aaron Davidson

Cancelling Spark jobs is of limited use because blocked executor threads are not interrupted. In
effect, the cancellation succeeds and the job is no longer "running", but executor threads may
still be tied up with the cancelled job's tasks and unable to do further work until those tasks
complete. This is particularly problematic when a task is deadlocked or blocked on an unlimited
or very long timeout.

It would be useful if cancelling a job also called Thread.interrupt() on the threads running its
tasks, since interruption breaks out of most blocking situations, such as waits on Object
monitors or IO. The one caveat is HDFS-1208: HDFS's DFSClient will not only swallow an
InterruptedException but may reinterpret it as an IOException, causing HDFS to mark a node as
permanently failed. This feature must therefore be optional and probably off by default.
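As a minimal sketch of the mechanism proposed here (plain Java, not Spark code), the following
shows how Thread.interrupt() frees a thread that is blocked indefinitely, turning the block into
a catchable InterruptedException. The class and method names are illustrative only:

```java
public class InterruptDemo {
    // Starts a worker that blocks "forever" (simulating a stuck executor
    // thread), interrupts it, and reports whether the interrupt freed it.
    static boolean interruptBlockedWorker() throws InterruptedException {
        final boolean[] freed = {false};
        Thread worker = new Thread(() -> {
            try {
                // Simulates an executor thread blocked on a lock, IO,
                // or an effectively unbounded timeout.
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException e) {
                // The interrupt converts the indefinite block into an
                // exception, so the thread can exit and do other work.
                freed[0] = true;
            }
        });
        worker.start();
        Thread.sleep(100);   // give the worker time to block
        worker.interrupt();  // what job cancellation would invoke
        worker.join(5000);
        return freed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker freed by interrupt: " + interruptBlockedWorker());
    }
}
```

Note that this is exactly the behavior DFSClient mishandles per the HDFS-1208 caveat above,
which is why the interrupt would need to be opt-in.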

This message was sent by Atlassian JIRA
