spark-user mailing list archives

From Jianshi Huang <jianshi.hu...@gmail.com>
Subject Re: SPARK-3106 fixed?
Date Mon, 13 Oct 2014 20:36:01 GMT
It turned out to be caused by this issue:
https://issues.apache.org/jira/browse/SPARK-3923

Setting spark.akka.heartbeat.interval to 100 solved it.
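For anyone hitting the same symptom, the workaround is a single configuration change. A minimal spark-defaults.conf fragment (the 100-second value is simply the one that worked here, not an official recommendation):

```
# Workaround for SPARK-3923: raise the Akka heartbeat interval
# (value in seconds) so executors are not spuriously flagged as lost.
spark.akka.heartbeat.interval  100
```

The same setting can also be passed per job, e.g. with --conf spark.akka.heartbeat.interval=100 on spark-submit.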

Jianshi

On Mon, Oct 13, 2014 at 4:24 PM, Jianshi Huang <jianshi.huang@gmail.com>
wrote:

> Hmm... it failed again, just lasted a little bit longer.
>
> Jianshi
>
> On Mon, Oct 13, 2014 at 4:15 PM, Jianshi Huang <jianshi.huang@gmail.com>
> wrote:
>
>> https://issues.apache.org/jira/browse/SPARK-3106
>>
>> I'm hitting the same errors described in SPARK-3106 (no other types of
>> errors observed) while running a bunch of SQL queries on Spark 1.2.0
>> built from the latest master HEAD.
>>
>> Any updates to this issue?
>>
>> My main task is to join a huge fact table with a dozen dim tables (using
>> HiveContext) and then map the result to my class object. It failed a
>> couple of times, so I cached the intermediate table, and for now it seems
>> to be working fine... I had no idea why until I found SPARK-3106.
>>
>> Cheers,
>> --
>> Jianshi Huang
>
>
>
> --
> Jianshi Huang



-- 
Jianshi Huang

LinkedIn: jianshi
Twitter: @jshuang
Github & Blog: http://huangjs.github.com/
