spark-user mailing list archives

From Khanderao kand <khanderao.k...@gmail.com>
Subject Re: Problems with broadcast large datastructure
Date Sat, 11 Jan 2014 00:27:45 GMT
If your object is larger than 10 MB, you may need to increase spark.akka.frameSize.

What is your spark.akka.timeout setting?

Did you change spark.akka.heartbeat.interval?
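
For reference, here is a sketch of what those three settings might look like in spark-defaults.conf. The property names match the Akka-based Spark releases of this era (0.8.x/0.9.x); the values are illustrative only, not recommendations:

```
# Illustrative values only -- tune for your object size and network.
spark.akka.frameSize           64      # max Akka message size in MB (default 10)
spark.akka.timeout             100     # Akka communication timeout, in seconds
spark.akka.heartbeat.interval  10000   # heartbeat interval, in seconds
```

Raising frameSize matters when a single serialized message (such as a task result or broadcast chunk) exceeds the 10 MB default; the timeout and heartbeat settings govern when a slow or busy executor is declared lost.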

BTW, given that a large object is being broadcast across 25 nodes, you may
also want to consider how frequently that transfer happens and evaluate
alternative patterns.
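
For example, one such pattern is to broadcast the structure once and reuse the handle across iterations, instead of re-shipping it with every job. A minimal Scala sketch (not runnable as-is: `sc`, `data`, `numIterations`, and `loadLookupTable` are hypothetical placeholders, not part of the original report):

```scala
// Hypothetical sketch: broadcast the large structure once, then reuse
// the broadcast handle across iterations.
val lookupTable: Map[String, Double] = loadLookupTable()  // hypothetical loader
val broadcastTable = sc.broadcast(lookupTable)            // one transfer per node

for (i <- 1 to numIterations) {
  val counts = data.map { key =>
    // Tasks read through the broadcast handle; the map itself is not
    // serialized into every task closure.
    broadcastTable.value.getOrElse(key, 0.0)
  }
  counts.count()
}
```

The key point is that `sc.broadcast` is called once outside the loop; referencing the raw `lookupTable` inside the closure instead would ship the whole map with every task.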




On Tue, Jan 7, 2014 at 12:55 AM, Sebastian Schelter <ssc@apache.org> wrote:

> Spark repeatedly fails to broadcast a large object on a cluster of 25
> machines for me.
>
> I get log messages like this:
>
> [spark-akka.actor.default-dispatcher-4] WARN
> org.apache.spark.storage.BlockManagerMasterActor - Removing BlockManager
> BlockManagerId(3, cloud-33.dima.tu-berlin.de, 42185, 0) with no recent
> heart beats: 134689ms exceeds 45000ms
>
> Is there something wrong with my config? Do I have to increase some
> timeout?
>
> Thx,
> Sebastian
>
