spark-user mailing list archives

From Peter Rudenko <>
Subject Re: How to restrict disk space for spark caches on yarn?
Date Mon, 13 Jul 2015 13:57:13 GMT
Hi Andrew, here's what I found. Maybe it will be relevant for people with 
the same issue:

1) There are three types of local resources in YARN (PUBLIC, PRIVATE, 
APPLICATION).

2) The Spark cache is an APPLICATION-visibility resource.

3) Currently it's not possible to specify a quota for APPLICATION 
resources.

4) The only option is to tune these two settings:

yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage - 
The maximum percentage of disk space utilization allowed, after which a 
disk is marked as bad. Values can range from 0.0 to 100.0. If the value 
is greater than or equal to 100, the NodeManager checks for a full disk. 
This applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.

yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb - The 
minimum space that must be available on a disk for it to be used. This 
applies to yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.
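For reference, both health-checker settings go into yarn-site.xml on each 
NodeManager; the values below are only illustrative and should be tuned 
to your disks:

```xml
<!-- Illustrative values only; adjust to your cluster. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>90.0</value>
</property>
<property>
  <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
  <value>1024</value>
</property>
```

Note these only stop YARN from scheduling onto a nearly full disk; they 
don't cap how much a single application can write.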

5) YARN's cache cleanup doesn't clean APPLICATION resources.

As I understand it, APPLICATION resources are cleaned up when the Spark 
application terminates correctly (via sc.stop()). But in my case, when 
the job filled all the disk space, it got stuck and couldn't stop 
correctly. After I restarted YARN, I found no easy way to trigger cache 
cleanup other than deleting the directories manually on all the nodes.
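To make the correct-termination path more robust, one option is to 
guarantee sc.stop() runs even when a stage fails. A minimal sketch 
(the input path and app name here are made up, and this assumes Spark's 
Scala API on YARN):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object MlWorkflow {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ml-workflow"))
    try {
      // Hypothetical input; MEMORY_AND_DISK_SER spills serialized blocks
      // into the YARN appcache directories on each node.
      val data = sc.textFile("hdfs:///data/input")
        .persist(StorageLevel.MEMORY_AND_DISK_SER)
      println(data.count())
    } finally {
      // Always stop the context so YARN can remove the app's local cache.
      sc.stop()
    }
  }
}
```

This doesn't prevent the disk from filling up mid-job, but it avoids the 
case where a hung application never releases its appcache.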

Peter Rudenko

On 2015-07-10 20:07, Andrew Or wrote:
> Hi Peter,
> AFAIK Spark assumes infinite disk space, so there isn't really a way 
> to limit how much space it uses. Unfortunately I'm not aware of a 
> simpler workaround than to simply provision your cluster with more 
> disk space. By the way, are you sure that it's disk space that 
> exceeded the limit, and not the number of inodes? If it's the latter, 
> maybe you could control the ulimit of the container.
> To answer your other question: if it can't persist to disk then yes it 
> will fail. It will only recompute from the data source if for some 
> reason someone evicted our blocks from memory, but that shouldn't 
> happen in your case since you're using MEMORY_AND_DISK_SER.
> -Andrew
> 2015-07-10 3:51 GMT-07:00 Peter Rudenko < 
> <>>:
>     Hi, I have a Spark ML workflow that uses some persist calls. When
>     I launch it with a 1 TB dataset, it brings down the whole cluster
>     because it fills all the disk space at /yarn/nm/usercache/root/appcache.
>     I found a YARN setting:
>     yarn.nodemanager.localizer.cache.target-size-mb - Target size of
>     the localizer cache in MB, per NodeManager. It is a target
>     retention size that only includes resources with PUBLIC and
>     PRIVATE visibility and excludes resources with APPLICATION visibility.
>     But it excludes resources with APPLICATION visibility, and the Spark
>     cache, as I understand, is of APPLICATION type.
>     Is it possible to restrict disk space for a Spark application?
>     Will Spark fail if it isn't able to persist to disk
>     (StorageLevel.MEMORY_AND_DISK_SER), or will it recompute from the
>     data source?
>     Thanks,
>     Peter Rudenko
