hadoop-yarn-dev mailing list archives

From Jan Lukavský <jan.lukav...@firma.seznam.cz>
Subject Short peaks in container memory usage
Date Tue, 09 Aug 2016 08:31:12 GMT
Hello community,

I have a question about container resource calculation in the nodemanager. 
Some time ago I filed JIRA 
https://issues.apache.org/jira/browse/YARN-4681, which I thought might 
address our problems with containers being killed because of read-only 
mmapping of memory blocks. The JIRA has not been resolved yet, but it 
turned out for us that the patch doesn't solve the problem. Some 
applications (namely Apache Spark) tend to allocate really large memory 
blocks outside the JVM heap (using mmap, but with MAP_PRIVATE), but only 
for short time periods. We solved this by creating a smoothing resource 
calculator, which averages the memory usage of a container over some 
time period (say 5 minutes). This eliminates the problem of a container 
being killed for a short memory-consumption peak, but at the same time 
preserves the ability to kill a container that *really* consumes an 
excessive amount of memory.
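
To illustrate the idea, here is a minimal sketch of what such a smoothing 
calculator could look like. The class and method names (SmoothedMemoryTracker, 
addSample, isOverLimit) are made up for this example and are not actual YARN 
interfaces; in our real patch the logic sits in a resource calculator used by 
the nodemanager's container monitoring, but the sketch below is standalone for 
clarity.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: compare the *average* memory usage over a sliding time window
// against the container limit, instead of the instantaneous reading.
public class SmoothedMemoryTracker {

    private static class Sample {
        final long timestampMs;
        final long memoryBytes;
        Sample(long timestampMs, long memoryBytes) {
            this.timestampMs = timestampMs;
            this.memoryBytes = memoryBytes;
        }
    }

    private final Deque<Sample> window = new ArrayDeque<>();
    private final long windowMs;       // averaging window, e.g. 5 minutes
    private long windowSumBytes = 0;

    public SmoothedMemoryTracker(long windowMs) {
        this.windowMs = windowMs;
    }

    // Record one memory measurement (e.g. the RSS sampled for the container).
    public void addSample(long nowMs, long memoryBytes) {
        window.addLast(new Sample(nowMs, memoryBytes));
        windowSumBytes += memoryBytes;
        // Drop samples that have fallen out of the averaging window.
        while (!window.isEmpty()
                && nowMs - window.peekFirst().timestampMs > windowMs) {
            windowSumBytes -= window.pollFirst().memoryBytes;
        }
    }

    // Average memory usage over the current window, in bytes.
    public long smoothedUsageBytes() {
        return window.isEmpty() ? 0 : windowSumBytes / window.size();
    }

    // A short peak above the limit does not trigger a kill; only a
    // sustained average above the limit does.
    public boolean isOverLimit(long limitBytes) {
        return smoothedUsageBytes() > limitBytes;
    }
}

The monitoring loop would feed each sampled value into addSample() and call 
isOverLimit() instead of comparing the raw reading against the limit.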

My question is: does this seem like a systematic approach to you, and 
should I post our patch to the community, or am I thinking in the wrong 
direction from the beginning? :)


Thanks for your reactions,

  Jan



