spark-user mailing list archives

From mharwida <>
Subject Re: Spark writing to disk when there's enough memory?!
Date Mon, 20 Jan 2014 18:11:59 GMT

I've experimented with the parameters suggested, but we are still seeing the
same problem: data is spilling to disk even though there is clearly enough
memory on the worker nodes.

Please note that the data is distributed evenly across the 6 Hadoop nodes
(about 5 GB per node).

Are there any workarounds, or clues as to why this is still happening?
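In case it helps, here is a minimal sketch of the kind of thing we are trying (the master URL and HDFS path are placeholders, not our actual job). With MEMORY_ONLY, the cache itself should never write to disk; partitions that don't fit are just recomputed. So if disk writes still appear, they presumably come from shuffle spill rather than the cache:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.storage.StorageLevel

// Reserve more of each executor's heap for cached RDDs.
// spark.storage.memoryFraction defaults to 0.6 in current releases.
System.setProperty("spark.storage.memoryFraction", "0.8")

val sc = new SparkContext("spark://master:7077", "cache-test")

// Placeholder path; roughly 30 GB total across the 6 nodes.
val data = sc.textFile("hdfs:///data/input")

// MEMORY_ONLY never spills: partitions that don't fit in memory
// are dropped and recomputed on demand instead of written to disk.
data.persist(StorageLevel.MEMORY_ONLY)

// Materialize the cache, then check the Storage tab in the web UI
// to see how much of the RDD actually fits in memory.
data.count()
```

If the Storage tab shows the RDD fully cached but disk I/O still occurs, that would point at shuffle spill, which is governed separately (e.g. spark.shuffle.spill) rather than by the storage fraction.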

