spark-user mailing list archives

From Jörn Franke <jornfra...@gmail.com>
Subject Re: Off Heap (Tungsten) Memory Usage / Management ?
Date Wed, 21 Sep 2016 22:41:53 GMT
All off-heap memory is still allocated by the JVM process, so if you limit the memory of that process you limit the off-heap memory as well. The heap of the JVM process can be limited via the Xms/Xmx parameters of the JVM; these can be configured via the Spark options for YARN (be aware that they differ between cluster and client mode), but I recommend using the Spark options for the off-heap maximum instead.

https://spark.apache.org/docs/latest/running-on-yarn.html
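For illustration, a spark-submit sketch along these lines would make the executor heap, the Tungsten off-heap cap, and the YARN container overhead explicit (the values, class name, and jar are placeholders only, not recommendations):

    # spark.executor.memory              -> executor JVM heap (becomes -Xmx)
    # spark.memory.offHeap.size          -> cap for Tungsten off-heap allocations
    # spark.yarn.executor.memoryOverhead -> extra MB YARN reserves per container beyond the heap
    spark-submit --master yarn --deploy-mode cluster \
      --conf spark.executor.memory=2g \
      --conf spark.memory.offHeap.enabled=true \
      --conf spark.memory.offHeap.size=1g \
      --conf spark.yarn.executor.memoryOverhead=1536 \
      --class com.example.MyJob my-job.jar

In the Spark-on-YARN versions current at the time, the container is sized from the heap plus this overhead, so the off-heap maximum has to fit within the overhead or YARN may kill the container for exceeding its physical memory limit.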


> On 21 Sep 2016, at 22:02, Michael Segel <msegel_hadoop@hotmail.com> wrote:
> 
> I’ve asked this question a couple of times of a friend who didn’t know the answer… so I thought I would try here.
> 
> 
> Suppose we launch a job on a cluster (YARN) and we have set up the containers to be 3GB in size.
> 
> 
> What does that 3GB represent? 
> 
> I mean, what happens if we end up using 2-3GB of off-heap storage via Tungsten? 
> What will Spark do? 
> Will it try to honor the container’s limits and throw an exception, or will it allow my job to grab that amount of memory and exceed YARN’s expectations since it’s off heap? 
> 
> Thx
> 
> -Mike
> 
