spark-user mailing list archives

From Tarun Garg <bigdat...@live.com>
Subject YARN worker out of disk memory
Date Fri, 26 Jun 2015 17:41:20 GMT
Hi,

I am running a Spark job on YARN. After 2-3 hours of execution the workers start
dying, and I found a large number of temp_shuffle files under
/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1435184713615_0008/blockmgr-333f0ade-2474-43a6-9960-f08a15bcc7b7/3f.
My job is kafkaStream.map().map().cache(), and on this cached stream I run three
different processing branches (a simplified sketch is included below):
1. foreachRDD()
2. mapToPair().reduceByKey().foreachRDD()
3. flatMapToPair().groupByKeyAndWindow().map().foreachRDD()
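
To make the topology concrete, here is a stripped-down sketch of the job, written
against the Spark 1.x Java streaming API. It is not my real code: the Kafka source
is replaced by a socket stream just to keep it self-contained, and the parsing,
keys, batch interval, and window sizes are placeholders.

import java.util.Arrays;

import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class ShuffleTopologySketch {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("shuffle-topology-sketch");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

    // Stand-in source: the real job reads from Kafka; a socket stream keeps the sketch self-contained.
    JavaDStream<String> kafkaStream = jssc.socketTextStream("localhost", 9999);

    // kafkaStream.map().map().cache() -- the cached stream shared by all three branches.
    JavaDStream<String> parsed = kafkaStream
        .map(line -> line.trim())
        .map(line -> line.toLowerCase())
        .cache();

    // Branch 1: foreachRDD()
    parsed.foreachRDD(rdd -> { rdd.count(); return null; });

    // Branch 2: mapToPair().reduceByKey().foreachRDD() -- first shuffle.
    parsed
        .mapToPair(line -> new Tuple2<String, Long>(line.split(",")[0], 1L))
        .reduceByKey((a, b) -> a + b)
        .foreachRDD(rdd -> { rdd.take(10); return null; });

    // Branch 3: flatMapToPair().groupByKeyAndWindow().map().foreachRDD() -- second shuffle.
    parsed
        .flatMapToPair(line -> {
          String[] parts = line.split(",");
          Tuple2<String, String> kv = new Tuple2<>(parts[0], line);
          return Arrays.asList(kv); // Spark 1.x PairFlatMapFunction returns an Iterable
        })
        .groupByKeyAndWindow(Durations.minutes(5), Durations.minutes(1))
        .map(pair -> pair._1()) // placeholder for the real per-window transformation
        .foreachRDD(rdd -> { rdd.take(10); return null; });

    jssc.start();
    jssc.awaitTermination();
  }
}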

This pipeline shuffles twice, which raises two questions:
1. My understanding is that shuffle data is deleted from disk after each shuffle,
so why is it not being deleted here?
2. Since I am still configuring the environment I kill the process quite often;
does that leave shuffle data behind on disk?

Any thoughts on this?

Thanks
Tarun  



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/YARN-worker-out-of-disk-memory-tp23510.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


