spark-user mailing list archives

From Guru Medasani <gdm...@outlook.com>
Subject Re: java.lang.OutOfMemoryError: GC overhead limit exceeded
Date Tue, 27 Jan 2015 22:34:41 GMT
Hi Antony,

What is the total amount of memory, in MB, that can be allocated to 
containers on your NodeManagers?

yarn.nodemanager.resource.memory-mb

Can you check the above configuration in the yarn-site.xml used by the 
NodeManager process?
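
For reference, the entry in yarn-site.xml normally looks like the following 
(the 24576 value here is only an illustration, not a recommendation):

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>24576</value>
  </property>

Each executor container (heap plus memoryOverhead) has to fit under this 
per-node value.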

-Guru Medasani

From:  Sandy Ryza <sandy.ryza@cloudera.com>
Date:  Tuesday, January 27, 2015 at 3:33 PM
To:  Antony Mayi <antonymayi@yahoo.com>
Cc:  "user@spark.apache.org" <user@spark.apache.org>
Subject:  Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

Hi Antony,

If you look in the YARN NodeManager logs, do you see that it's killing the 
executors?  Or are they crashing for a different reason?

-Sandy

On Tue, Jan 27, 2015 at 12:43 PM, Antony Mayi 
<antonymayi@yahoo.com.invalid> wrote:
Hi,

I am using spark.yarn.executor.memoryOverhead=8192 yet the executors keep 
crashing with this error.

does that mean I genuinely don't have enough RAM, or is this a matter of 
config tuning?

other config options used:
spark.storage.memoryFraction=0.3
SPARK_EXECUTOR_MEMORY=14G
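
for context, Spark on YARN requests executor containers of roughly the 
executor memory plus the overhead, i.e. 14G + 8192MB ≈ 22GB each here, so 
that is what has to fit under the NodeManager's container memory limit. 
expressed as SparkConf entries the same settings would look roughly like 
this (only the app name is a placeholder):

  import org.apache.spark.{SparkConf, SparkContext}

  // same values as listed above; only the app name is made up
  val conf = new SparkConf()
    .setAppName("als-implicit")                          // placeholder
    .setMaster("yarn-client")
    .set("spark.executor.memory", "14g")                 // = SPARK_EXECUTOR_MEMORY=14G
    .set("spark.yarn.executor.memoryOverhead", "8192")   // in MB
    .set("spark.storage.memoryFraction", "0.3")
  val sc = new SparkContext(conf)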

running Spark 1.2.0 as yarn-client on a cluster of 10 nodes (the workload is 
ALS trainImplicit on a ~15GB dataset)
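
the call itself is roughly the following sketch (the input path, field 
layout, rank, iteration count, lambda and alpha below are placeholders, not 
the actual values):

  import org.apache.spark.mllib.recommendation.{ALS, Rating}

  // build the ~15GB ratings RDD; path and parsing are illustrative only
  val ratings = sc.textFile("hdfs:///path/to/ratings")
    .map(_.split(","))
    .map(f => Rating(f(0).toInt, f(1).toInt, f(2).toDouble))

  // trainImplicit(ratings, rank, iterations, lambda, alpha) -- placeholder hyperparameters
  val model = ALS.trainImplicit(ratings, 50, 10, 0.01, 40.0)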

thanks for any ideas,
Antony.


