spark-user mailing list archives

From Michael Shtelma <>
Subject Re: Running out of space on /tmp file system while running spark job on yarn because of size of blockmgr folder
Date Mon, 19 Mar 2018 17:29:13 GMT
Hi Keith,

Thank you for your answer!
I have done this, and it works for the Spark driver.
I would like to do the same for the executors as well, so that the
setting is used on all the nodes where I have executors.
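
A sketch of the executor side, assuming a YARN deployment: as far as I
know, executors launched in YARN containers take their scratch
directories (including the blockmgr-* folders) from the NodeManager's
yarn.nodemanager.local-dirs, exposed to containers as the LOCAL_DIRS
environment variable, and the NodeManagers need a restart for a change
there to take effect. The path below is illustrative:

```xml
<!-- yarn-site.xml on every NodeManager; /data/yarn-local is an illustrative path -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/yarn-local</value>
</property>
```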


On Mon, Mar 19, 2018 at 6:07 PM, Keith Chapman <> wrote:
> Hi Michael,
> You could either set spark.local.dir through the Spark conf or as a
> system property.
> Regards,
> Keith.
> On Mon, Mar 19, 2018 at 9:59 AM, Michael Shtelma <> wrote:
>> Hi everybody,
>> I am running a Spark job on YARN, and my problem is that the blockmgr-*
>> folders are being created under
>> /tmp/hadoop-msh/nm-local-dir/usercache/msh/appcache/application_id/*
>> The size of this folder can grow significantly and does not fit into
>> the /tmp file system even for a single job, which is a real problem
>> for my installation.
>> I have redefined hadoop.tmp.dir in core-site.xml and
>> yarn.nodemanager.local-dirs in yarn-site.xml, pointing both to another
>> location, and expected the block manager to create its files there
>> rather than under /tmp, but this is not the case: the files are still
>> created under /tmp.
>> I am wondering if there is a way to make Spark not use /tmp at all
>> and configure it to create all its files somewhere else?
>> Any assistance would be greatly appreciated!
>> Best,
>> Michael
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail:
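
Keith's suggestion above could be sketched as follows; all forms set the
same spark.local.dir property, and the path is illustrative:

```
# via the Spark conf, e.g. in spark-defaults.conf:
spark.local.dir    /data/spark-local

# or on the command line:
spark-submit --conf spark.local.dir=/data/spark-local ...

# or as a JVM system property on the driver (Spark loads spark.*-prefixed
# system properties into its conf):
spark-submit --driver-java-options "-Dspark.local.dir=/data/spark-local" ...
```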
