spark-user mailing list archives

From Guru Medasani <>
Subject Re: java.lang.OutOfMemoryError: GC overhead limit exceeded
Date Tue, 27 Jan 2015 22:34:41 GMT
Hi Anthony,

What is your setting for the total amount of memory, in MB, that can be
allocated to containers on your NodeManagers?

Can you check that configuration in the yarn-site.xml used by the NodeManager
process?
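
For reference, a sketch of that property as it would appear in yarn-site.xml
(assuming the setting meant here is yarn.nodemanager.resource.memory-mb; the
value shown is only a placeholder, not a recommendation):

    <property>
      <!-- total memory, in MB, this NodeManager can allocate to containers -->
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>24576</value>
    </property>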

-Guru Medasani

From:  Sandy Ryza <>
Date:  Tuesday, January 27, 2015 at 3:33 PM
To:  Antony Mayi <>
Cc:  "" <>
Subject:  Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

Hi Antony,

If you look in the YARN NodeManager logs, do you see that it's killing the 
executors?  Or are they crashing for a different reason?
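
For example, one way to check is to grep the NodeManager daemon log on the
node where a failed executor ran for its container id (log locations vary by
distribution; the path and file pattern below are assumptions, and
<container_id> is a placeholder):

    grep "<container_id>" /var/log/hadoop-yarn/*nodemanager*.log

A kill initiated by the NodeManager usually shows up there, whereas a plain
JVM crash will normally only appear in the executor's own container logs.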


On Tue, Jan 27, 2015 at 12:43 PM, Antony Mayi 
<> wrote:

I am using spark.yarn.executor.memoryOverhead=8192, yet my executors are 
crashing with this error.
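
For reference, a sketch of how that setting is typically passed on YARN; the
executor memory and application jar below are placeholders, not the values
actually used in this job:

    spark-submit --master yarn-client \
      --conf spark.yarn.executor.memoryOverhead=8192 \
      --executor-memory 16g \
      your-app.jar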

Does that mean I genuinely don't have enough RAM, or is this a matter of 
configuration?

other config options used:

I am running Spark 1.2.0 as yarn-client on a cluster of 10 nodes (the workload 
is ALS trainImplicit on a ~15GB dataset).
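
For context, a minimal Scala sketch of that kind of workload against Spark 1.2
MLlib; the input path, parsing, and ALS parameters (rank, iterations, lambda,
alpha) are placeholders rather than the actual values from this job:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    object ImplicitALSExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("ImplicitALSExample"))

        // placeholder input: "user,product,count" lines; the real dataset/format is not given here
        val ratings = sc.textFile("hdfs:///path/to/ratings")
          .map(_.split(","))
          .map(f => Rating(f(0).toInt, f(1).toInt, f(2).toDouble))
          .cache()

        // trainImplicit(ratings, rank, iterations, lambda, alpha) -- parameter values are placeholders
        val model = ALS.trainImplicit(ratings, 50, 10, 0.01, 40.0)

        // placeholder prediction for one (user, product) pair
        println(model.predict(1, 1))

        sc.stop()
      }
    }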

thanks for any ideas,
