spark-user mailing list archives

From Jörn Franke <jornfra...@gmail.com>
Subject Re: Spark Yarn executor container memory
Date Tue, 16 Aug 2016 05:34:03 GMT
Both are part of the heap.

> On 16 Aug 2016, at 04:26, Lan Jiang <ljiang2@gmail.com> wrote:
> 
> Hello,
> 
> My understanding is that the YARN executor container memory is based on
> "spark.executor.memory" + "spark.yarn.executor.memoryOverhead". The first one is for heap
> memory and the second one is for off-heap memory. The spark.executor.memory value is used
> by -Xmx to set the max heap size. Now my question is why it does not count the permgen size
> and the memory used by thread stacks, which are not part of the max heap size. IMHO, the
> YARN executor container memory should be set to: spark.executor.memory + [-XX:MaxPermSize]
> + number_of_threads * [-Xss] + spark.yarn.executor.memoryOverhead. What did I miss?
> 
> Lan
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
> 
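[Editor's note: the sizing the thread discusses can be sketched as follows. The 4096 MB heap is an illustrative value, and the overhead default shown (max of 384 MB or 10% of the heap) is the documented default for spark.yarn.executor.memoryOverhead in Spark of this era; this is a sketch, not an official formula.]

```python
# Sketch: what Spark asks YARN for per executor container.
# Assumed example heap size; set via spark.executor.memory (becomes -Xmx).
executor_memory_mb = 4096

# Default spark.yarn.executor.memoryOverhead: max(384 MB, 10% of the heap).
# Permgen and thread stacks are expected to fit inside this overhead.
overhead_mb = max(384, int(executor_memory_mb * 0.10))

# Requested container size; YARN then rounds this up to a multiple of
# yarn.scheduler.minimum-allocation-mb.
container_mb = executor_memory_mb + overhead_mb
print(container_mb)  # 4096 + 409 = 4505
```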

