spark-issues mailing list archives

From "Josh Rosen (JIRA)" <>
Subject [jira] [Resolved] (SPARK-17485) Failed remote cached block reads can lead to whole job failure
Date Mon, 12 Sep 2016 22:45:20 GMT


Josh Rosen resolved SPARK-17485.
       Resolution: Fixed
    Fix Version/s: 2.1.0

Issue resolved by pull request 15037

> Failed remote cached block reads can lead to whole job failure
> --------------------------------------------------------------
>                 Key: SPARK-17485
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Block Manager
>    Affects Versions: 1.6.2, 2.0.0
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>            Priority: Critical
>             Fix For: 2.0.1, 2.1.0
> In Spark's RDD.getOrCompute we first try to read a local copy of a cached block, then a
> remote copy, and only fall back to recomputing the block if no cached copy (local or
> remote) can be read. This logic works correctly in the case where no remote copies of
> the block exist, but if there _are_ remote copies but reads of those copies fail (due
> to network issues or internal Spark bugs) then the BlockManager will throw a
> {{BlockFetchException}} that fails the entire job.
>
> In the case of torrent broadcast we really _do_ want to fail the entire job in case no
> remote blocks can be fetched, but this logic is inappropriate for cached blocks because
> those can/should be recomputed.
>
> Therefore, I think that this exception should be thrown higher up the call stack by the
> BlockManager client code and not the block manager itself.
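For illustration, here is a minimal, self-contained sketch of the read order the description refers to. It is not Spark's actual internals: GetOrComputeSketch, readLocal, readRemote, and recompute are stand-in names that only model the control flow. The point of the ticket is step 2: a failed remote fetch of a cached block should surface as "no copy available" (None) so that recomputation can run, instead of escalating into a job-killing {{BlockFetchException}}.

{code:scala}
object GetOrComputeSketch {
  type BlockId = String

  // 1. Prefer a local cached copy (pretend there is none).
  def readLocal(id: BlockId): Option[Seq[Int]] = None

  // 2. Then try remote replicas. A real fetch can die from network issues;
  //    for cached blocks the ticket proposes treating that as a miss (None)
  //    rather than letting the exception fail the whole job.
  def readRemote(id: BlockId): Option[Seq[Int]] =
    try throw new java.io.IOException(s"fetch of $id failed")
    catch { case _: java.io.IOException => None }

  // 3. Finally, recompute the block from lineage.
  def recompute(id: BlockId): Seq[Int] = Seq(1, 2, 3)

  def getOrCompute(id: BlockId): Seq[Int] =
    readLocal(id).orElse(readRemote(id)).getOrElse(recompute(id))

  def main(args: Array[String]): Unit =
    // Prints List(1, 2, 3): the failed remote read fell through to recompute.
    println(getOrCompute("rdd_0_0"))
}
{code}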
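The complementary strict path then lives in the caller, which is the "thrown higher up the call stack by the client code" placement the description proposes: once the fetch layer returns an Option instead of throwing, a consumer that genuinely cannot recompute its block, such as torrent broadcast, escalates the miss itself. Again a sketch with illustrative names (readBroadcastBlock, fetchRemote), not Spark's broadcast internals:

{code:scala}
object BroadcastCallerSketch {
  // Torrent-broadcast blocks cannot be recomputed from lineage, so a missing
  // remote copy is genuinely fatal; the caller, not the block manager,
  // turns the miss into an exception.
  def readBroadcastBlock(id: String,
                         fetchRemote: String => Option[Array[Byte]]): Array[Byte] =
    fetchRemote(id).getOrElse {
      throw new java.io.IOException(s"Failed to get remote block $id")
    }

  def main(args: Array[String]): Unit = {
    // Succeeds when some executor still holds the block...
    println(readBroadcastBlock("broadcast_0", _ => Some(Array[Byte](42))).toList)
    // ...while readBroadcastBlock("broadcast_1", _ => None) would throw here.
  }
}
{code}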

This message was sent by Atlassian JIRA
