spark-user mailing list archives

From Ashok Kumar <>
Subject The parameter spark.yarn.executor.memoryOverhead
Date Mon, 30 Oct 2017 19:16:11 GMT
Hi Gurus,

The parameter spark.yarn.executor.memoryOverhead is explained as below:

executorMemory * 0.10, with minimum of 384
The amount of off-heap memory (in megabytes) to be allocated per executor. This is memory
that accounts for things like VM overheads, interned strings, other native overheads, etc.
This tends to grow with the executor size (typically 6-10%).                                                                                                                                                                                                                 
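The default described above is simply max(10% of executor memory, 384 MB). A minimal sketch of that calculation (the helper function name is my own, not part of Spark):

```python
# Sketch of the documented default for spark.yarn.executor.memoryOverhead.
# Hypothetical helper, not actual Spark code; values are in megabytes.
def default_memory_overhead(executor_memory_mb, fraction=0.10, minimum_mb=384):
    """Return the default off-heap overhead in MB: max(fraction * memory, minimum)."""
    return max(int(executor_memory_mb * fraction), minimum_mb)

# A 10 GB (10240 MB) executor gets ~1 GB of overhead by default:
print(default_memory_overhead(10240))  # 1024
# Small executors are floored at the 384 MB minimum:
print(default_memory_overhead(1024))   # 384
```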

So does that mean that for an executor of 10GB this should ideally be set to ~10% = 1GB?

What would happen if we set it higher, say to 30% ~ 3GB?
What exactly is this memory used for (as opposed to the memory allocated to the executor itself)?
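For reference, overriding the default to a higher value like the 3GB mentioned above would look something like this (illustrative values; note the setting takes megabytes, not a percentage):

```shell
# Request a 10 GB executor heap plus ~3 GB of off-heap overhead (3072 MB).
# YARN will then size each executor container at roughly 10 GB + 3 GB.
spark-submit \
  --master yarn \
  --executor-memory 10g \
  --conf spark.yarn.executor.memoryOverhead=3072 \
  my_app.py
```

The trade-off is that the overhead counts toward the container size YARN allocates, so a larger overhead leaves fewer or smaller executors per node.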

Thanking you