spark-user mailing list archives

From mharwida <majdharw...@yahoo.com>
Subject Spark writing to disk when there's enough memory?!
Date Mon, 13 Jan 2014 12:24:44 GMT
Hi All,

I'm creating a cached table in memory via Shark using the command:
create table tablename_cached as select * from tablename;
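
For context: the _cached suffix is what tells Shark to keep the table in memory, which (as far as I understand it) amounts to persisting the table's RDD in memory. A minimal Spark-shell sketch of that persist/materialize pattern, using a hypothetical HDFS path rather than Shark's actual columnar-RDD machinery:

    // Minimal sketch of the persist/materialize pattern behind a cached table.
    // The path is hypothetical; Shark really builds an in-memory columnar RDD.
    import org.apache.spark.storage.StorageLevel

    val rows = sc.textFile("hdfs:///warehouse/tablename")  // hypothetical table location
    val cached = rows.persist(StorageLevel.MEMORY_ONLY)    // keep partitions in memory only
    cached.count()                                          // materialize so blocks show up on the Storage tab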

Monitoring this via the Spark UI, I have noticed that data is being written
to disk even though there is clearly enough available memory on two of the
worker nodes. Please refer to the attached images. Cass4 and Cass3 each have
3GB of available memory, yet data is being written to disk on the worker
nodes that have used all their memory.

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n502/1.jpg> 

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n502/2.jpg> 
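
As a cross-check on the numbers the UI shows, one way to read per-node block-manager usage from a Spark/Shark Scala shell is sketched below. This is against the Spark 0.8.x/0.9-era API and assumes sc is the live SparkContext; the exact accessor names on StorageStatus may differ between versions.

    // Hedged sketch: print how much block-manager memory each worker reports
    // as used vs. remaining (values are in bytes).
    sc.getExecutorStorageStatus.foreach { s =>
      println(s.blockManagerId.host + ": memUsed=" + s.memUsed +
              " memRemaining=" + s.memRemaining + " maxMem=" + s.maxMem)
    }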

Could anyone shed some light on this, please?

Thanks
Majd



