spark-issues mailing list archives

From "Yi Tian (JIRA)" <>
Subject [jira] [Commented] (SPARK-3687) Spark hang while processing more than 100 sequence files
Date Wed, 01 Oct 2014 15:02:33 GMT


Yi Tian commented on SPARK-3687:


> Spark hang while processing more than 100 sequence files
> --------------------------------------------------------
>                 Key: SPARK-3687
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.2, 1.1.0
>            Reporter: Ziv Huang
> In my application, I read more than 100 sequence files into a JavaPairRDD, perform a flatMap
to get another JavaRDD, and then use takeOrdered to get the result.
> It is quite often (but not always) that the job hangs while executing some of the 120th-150th tasks.
> In 1.0.2, the job can hang for several hours, maybe forever (I couldn't wait for it to complete).
> When the job hangs, I can't kill it from the web UI.
> In 1.1.0, the job hangs for a couple of minutes (about 3 minutes, actually),
> and then the Spark master web UI shows that the job finished with state "FAILED".
> In addition, the stage page in the web UI still hangs, and the execution duration keeps accumulating.
> For both 1.0.2 and 1.1.0, the job hangs with no error messages anywhere.
> The current workaround is to use coalesce to reduce the number of partitions to be processed.
> I have never seen the job hang when the number of partitions to be processed is no greater than 100.
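
For readers trying to reproduce this, here is a minimal sketch of the pipeline described above, targeting the Spark 1.x Java API. The input glob, the Text key/value types, and the takeOrdered count of 10 are illustrative assumptions, not details from the report.

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;

import scala.Tuple2;

public class SequenceFilePipeline {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("sequence-file-pipeline");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Read 100+ sequence files matched by a glob (path is hypothetical).
    JavaPairRDD<Text, Text> pairs =
        sc.sequenceFile("hdfs:///data/input/*.seq", Text.class, Text.class);

    // flatMap each record into zero or more strings. Hadoop reuses the
    // Text objects, so convert to String inside the function.
    JavaRDD<String> words = pairs.flatMap(
        new FlatMapFunction<Tuple2<Text, Text>, String>() {
          @Override
          public Iterable<String> call(Tuple2<Text, Text> record) {
            return Arrays.asList(record._2().toString().split("\\s+"));
          }
        });

    // Pull the 10 smallest elements (by natural ordering) back to the driver.
    List<String> top = words.takeOrdered(10);
    System.out.println(top);

    sc.stop();
  }
}
{code}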
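Building on the same sketch, the workaround from the end of the report would look like the line below, inserted before the flatMap. The threshold of 100 is the reporter's empirical observation, not a documented limit.

{code:java}
// Reporter's workaround (sketch): merge the input down to at most 100
// partitions before further processing. coalesce(100) avoids a shuffle,
// so it only reduces the task count of the downstream stage.
JavaPairRDD<Text, Text> coalesced = pairs.coalesce(100);
{code}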
