spark-user mailing list archives

From Koert Kuipers <ko...@tresata.com>
Subject spark.yarn.executor.memoryOverhead
Date Wed, 23 Nov 2016 19:01:29 GMT
In YarnAllocator I see that memoryOverhead is by default set to:

math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt,
  MEMORY_OVERHEAD_MIN)

This does not take spark.memory.offHeap.size into account, I think. Should
it?

Something like:

math.max((MEMORY_OVERHEAD_FACTOR * executorMemory + offHeapMemory).toInt,
  MEMORY_OVERHEAD_MIN)
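
For illustration, here is a small standalone sketch of the difference,
assuming the Spark 2.x YarnAllocator constants (MEMORY_OVERHEAD_FACTOR =
0.10, MEMORY_OVERHEAD_MIN = 384 MB) and treating offHeapMemory as
spark.memory.offHeap.size already converted to MB; the object and method
names are mine for the example, not Spark API:

object MemoryOverheadSketch {
  // constants as in the Spark 2.x YarnAllocator (assumed values)
  val MEMORY_OVERHEAD_FACTOR = 0.10
  val MEMORY_OVERHEAD_MIN = 384 // MB

  // current behavior: overhead scales with heap size only
  def currentOverhead(executorMemory: Int): Int =
    math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt,
      MEMORY_OVERHEAD_MIN)

  // proposed: also reserve room for off-heap allocations
  def proposedOverhead(executorMemory: Int, offHeapMemory: Int): Int =
    math.max((MEMORY_OVERHEAD_FACTOR * executorMemory + offHeapMemory).toInt,
      MEMORY_OVERHEAD_MIN)

  def main(args: Array[String]): Unit = {
    val executorMemory = 8192 // MB, e.g. --executor-memory 8g
    val offHeapMemory = 2048  // MB, e.g. spark.memory.offHeap.size=2g
    println(s"current:  ${currentOverhead(executorMemory)} MB")                // 819
    println(s"proposed: ${proposedOverhead(executorMemory, offHeapMemory)} MB") // 2867
  }
}

With an 8g executor and 2g of off-heap memory, the current formula reserves
819 MB of overhead while the proposed one reserves 2867 MB, so the YARN
container request would actually have room for the off-heap allocations
instead of risking the container being killed for exceeding its limit.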
