spark-user mailing list archives

From Michael Shtelma <>
Subject Running out of space on /tmp file system while running spark job on yarn because of size of blockmgr folder
Date Mon, 19 Mar 2018 16:59:44 GMT
Hi everybody,

I am running a Spark job on YARN, and my problem is that the blockmgr-*
folders are being created under /tmp. For a single job this folder can
grow to a significant size and no longer fits into the /tmp file
system, which is a real problem for my installation.
I have redefined hadoop.tmp.dir in core-site.xml and
yarn.nodemanager.local-dirs in yarn-site.xml to point to another
location, expecting the block manager to create its files there
instead of under /tmp, but this is not the case: the files are still
created under /tmp.

I am wondering if there is a way to make Spark avoid /tmp entirely and
configure it to create all of these files somewhere else?
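For context on what I have tried so far: as I understand it, Spark's scratch location is normally controlled by spark.local.dir (or the SPARK_LOCAL_DIRS environment variable), but when running on YARN that setting is ignored and Spark inherits the local directories that the NodeManager advertises via yarn.nodemanager.local-dirs, so the yarn-site.xml change has to be made on every node and the NodeManagers restarted before it takes effect. A sketch of the relevant settings (the /data/... paths are placeholders for my installation, not defaults):

```
# spark-defaults.conf -- honored in local/standalone mode;
# ignored on YARN, where YARN's local dirs take precedence
spark.local.dir                  /data/spark-scratch

# some temp files are placed via the JVM's java.io.tmpdir,
# which can be redirected for driver and executors
spark.driver.extraJavaOptions    -Djava.io.tmpdir=/data/tmp
spark.executor.extraJavaOptions  -Djava.io.tmpdir=/data/tmp
```

```
<!-- yarn-site.xml on every NodeManager host;
     NodeManagers must be restarted after this change -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/yarn/local</value>
</property>
```

Is this understanding correct, or is there some other setting that still points the block manager at /tmp?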

Any assistance would be greatly appreciated!


