hadoop-yarn-dev mailing list archives

From Ravi Prakash <ravihad...@gmail.com>
Subject Re: Short peaks in container memory usage
Date Wed, 10 Aug 2016 00:48:54 GMT
Hi Jan!

Thanks for your contribution. In your approach, what happens when a few
containers on a node are using "excessive" memory (so that the total memory
used exceeds the RAM available on the machine)? Do you have overcommit
enabled?


On Tue, Aug 9, 2016 at 1:31 AM, Jan Lukavský <jan.lukavsky@firma.seznam.cz> wrote:

> Hello community,
> I have a question about container resource calculation in nodemanager.
> Some time ago I filed JIRA https://issues.apache.org/jira/browse/YARN-4681,
> which I thought might address our problems with containers being killed
> because of read-only mmapping of a memory block. The JIRA has not been
> resolved yet, but it turned out for us that the patch doesn't solve the
> problem.
> Some applications (namely Apache Spark) tend to allocate really large
> memory blocks outside the JVM heap (using mmap, but with MAP_PRIVATE), but
> only for short time periods. We solved this by creating a smoothing
> resource calculator, which averages the memory usage of a container over
> some time period (say 5 minutes). This eliminates the problem of a
> container being killed for a short memory consumption peak, while at the
> same time preserving the ability to kill a container that *really*
> consumes an excessive amount of memory.
> My question is, does this seem like a systematic approach to you, and
> should I post our patch to the community, or am I thinking in the wrong
> direction from the beginning? :)
> Thanks for your reactions,
>  Jan
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: yarn-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-help@hadoop.apache.org
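The smoothing resource calculator Jan describes can be sketched as a sliding-window average: instead of comparing a container's instantaneous memory usage against its limit, the monitor compares the average over a recent time window, so a short mmap peak does not trigger a kill. This is a minimal, hypothetical illustration of the idea only; the class and method names are illustrative and do not correspond to YARN's actual ContainersMonitor API or to the unposted patch.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of a smoothing memory calculator: keeps samples
// from the last `windowMillis` and reports their average, so short
// usage peaks are damped while sustained overuse is still detected.
public class SmoothedMemoryUsage {
    private final long windowMillis;
    // Each entry is {timestampMillis, usedBytes}.
    private final ArrayDeque<long[]> samples = new ArrayDeque<>();
    private long sum = 0;

    public SmoothedMemoryUsage(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Record one memory sample and evict samples older than the window.
    public void record(long timestampMillis, long usedBytes) {
        samples.addLast(new long[]{timestampMillis, usedBytes});
        sum += usedBytes;
        while (!samples.isEmpty()
                && samples.peekFirst()[0] < timestampMillis - windowMillis) {
            sum -= samples.pollFirst()[1];
        }
    }

    // Average usage over the current window (0 if no samples yet).
    public long average() {
        return samples.isEmpty() ? 0 : sum / samples.size();
    }

    // Kill decision is based on the smoothed value, not the last sample.
    public boolean overLimit(long limitBytes) {
        return average() > limitBytes;
    }

    public static void main(String[] args) {
        SmoothedMemoryUsage u = new SmoothedMemoryUsage(5 * 60 * 1000); // 5-minute window
        u.record(0, 1_000);          // steady usage
        u.record(1_000, 1_000);
        u.record(2_000, 10_000);     // short peak (e.g. a transient mmap)
        // An instantaneous check against a 5_000-byte limit would kill the
        // container at the 10_000 sample; the smoothed average does not.
        System.out.println(u.average());        // prints 4000
        System.out.println(u.overLimit(5_000)); // prints false
    }
}
```

A real implementation would plug into the NodeManager's per-container monitoring loop and would also need a policy for the window length, since a longer window tolerates longer peaks but delays detection of genuine overuse.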
