spark-issues mailing list archives

From "Patrick Wendell (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-1572) Uncaught IO exceptions in Pyspark kill Executor
Date Wed, 23 Apr 2014 21:48:15 GMT

     [ https://issues.apache.org/jira/browse/SPARK-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-1572.
------------------------------------

       Resolution: Fixed
    Fix Version/s: 1.0.0

> Uncaught IO exceptions in Pyspark kill Executor
> -----------------------------------------------
>
>                 Key: SPARK-1572
>                 URL: https://issues.apache.org/jira/browse/SPARK-1572
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.0.0, 0.9.1
>            Reporter: Aaron Davidson
>            Assignee: Aaron Davidson
>             Fix For: 1.0.0
>
>
> If an exception is thrown in the Python "stdin writer" thread during this line:
> {code}
> PythonRDD.writeIteratorToStream(parent.iterator(split, context), dataOut)
> {code}
> (e.g., while reading from an HDFS source), then the exception will be handled by the
> default Thread.UncaughtExceptionHandler, which is set in Executor. The default
> behavior is, unfortunately, to call System.exit().
> Ideally, ordinary exceptions thrown while running a task should not bring down all
> the executors of a Spark cluster.
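>
> A minimal sketch of the containment idea, in Scala, assuming a hypothetical
> WriterThread wrapper (not necessarily the fix that was actually merged): catch the
> exception inside the writer thread itself so it never reaches the default uncaught
> exception handler, and let the reader side re-throw it as an ordinary task failure.
> {code}
> import java.io.DataOutputStream
>
> // Hypothetical sketch: contain failures inside the stdin-writer thread.
> class WriterThread(iter: Iterator[Array[Byte]], dataOut: DataOutputStream)
>     extends Thread("stdin writer for Python") {
>
>   // A failure recorded here is re-thrown by the thread reading results,
>   // turning a would-be executor-killing exception into a task failure.
>   @volatile var exception: Option[Throwable] = None
>
>   setDaemon(true)
>
>   override def run(): Unit = {
>     try {
>       // Stream each serialized record to the Python worker's stdin.
>       iter.foreach { bytes =>
>         dataOut.writeInt(bytes.length)
>         dataOut.write(bytes)
>       }
>       dataOut.flush()
>     } catch {
>       // Catch everything: an uncaught throwable escaping this thread would
>       // hit the default handler, which calls System.exit().
>       case t: Throwable => exception = Some(t)
>     }
>   }
> }
> {code}
> The reader side can then check {{writerThread.exception.foreach(throw _)}} and fail
> only the task, leaving the executor JVM alive.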



--
This message was sent by Atlassian JIRA
(v6.2#6252)
