spark-user mailing list archives

From mharwida <majdharw...@yahoo.com>
Subject Re: Spark writing to disk when there's enough memory?!
Date Mon, 20 Jan 2014 18:11:59 GMT
Hi,

I've experimented with the parameters provided, but we are still seeing the
same problem: data is still spilling to disk even though there is clearly
enough memory on the worker nodes.

Please note that the data is distributed evenly across the six Hadoop nodes
(about 5 GB per node).
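
For reference, here is a minimal sketch of the kind of configuration we've
been experimenting with. The master URL, input path, and values below are
placeholders, and exact property names have varied across early Spark
releases; the key setting is spark.storage.memoryFraction, the fraction of
the executor heap Spark reserves for cached RDDs before it starts dropping
or spilling partitions:

    // Sketch only: illustrative values, 0.8/0.9-era API where config
    // is passed as system properties before the SparkContext is built.
    import org.apache.spark.SparkContext
    import org.apache.spark.storage.StorageLevel

    object MemoryCheck {
      def main(args: Array[String]): Unit = {
        // Fraction of the heap reserved for cached RDDs (default 0.6);
        // cached partitions beyond this fraction get evicted or spilled.
        System.setProperty("spark.storage.memoryFraction", "0.66")
        // Per-executor heap; must comfortably exceed ~5 GB per node.
        System.setProperty("spark.executor.memory", "8g")

        val sc = new SparkContext("spark://master:7077", "MemoryCheck")

        val data = sc.textFile("hdfs:///path/to/input")
        // MEMORY_ONLY never writes cached partitions to disk: blocks
        // that don't fit are dropped and recomputed, so any remaining
        // disk writes would come from shuffles, not from the cache.
        data.persist(StorageLevel.MEMORY_ONLY).count()
      }
    }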

Are there any workarounds, or clues as to why this is still happening?

Thanks,
Majd



