Hi,

I am using spark.yarn.executor.memoryOverhead=8192, yet my executors are still crashing with this error.

Does that mean I genuinely don't have enough RAM, or is this a matter of config tuning?

Other config options in use:
spark.storage.memoryFraction=0.3
SPARK_EXECUTOR_MEMORY=14G
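
For what it's worth, this is roughly how the settings reach the job on the driver side; a minimal sketch, the app name is just a placeholder and spark.executor.memory stands in for the SPARK_EXECUTOR_MEMORY env var:

import org.apache.spark.{SparkConf, SparkContext}

// Driver-side setup matching the settings listed above (app name is a placeholder).
val conf = new SparkConf()
  .setAppName("als-train-implicit")
  .set("spark.yarn.executor.memoryOverhead", "8192")  // MB of off-heap headroom per executor container
  .set("spark.storage.memoryFraction", "0.3")         // share of the executor heap reserved for cached blocks
  .set("spark.executor.memory", "14g")                // same effect as SPARK_EXECUTOR_MEMORY=14G

val sc = new SparkContext(conf)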

Running Spark 1.2.0 in yarn-client mode on a cluster of 10 nodes (the workload is ALS trainImplicit on a ~15 GB dataset).
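
In case it helps, the training call looks roughly like this; the input path, column layout, rank, iteration count, lambda and alpha below are placeholders rather than my exact values:

import org.apache.spark.mllib.recommendation.{ALS, Rating}

// sc is the SparkContext created above.
// Parse the ~15 GB input into Rating(user, product, confidence) records.
val ratings = sc.textFile("hdfs:///path/to/interactions")
  .map(_.split(','))
  .map { case Array(user, item, count) =>
    Rating(user.toInt, item.toInt, count.toDouble)
  }

// trainImplicit(ratings, rank, iterations, lambda, alpha)
val model = ALS.trainImplicit(ratings, 50, 10, 0.01, 40.0)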

Thanks for any ideas,
Antony.