spark-user mailing list archives

From Yong Zhang <>
Subject Re: [Worker Crashing] OutOfMemoryError: GC overhead limit exceeded
Date Fri, 24 Mar 2017 13:12:10 GMT
I am not 100% sure, but normally an OOM in the "dispatcher-event-loop" thread means the driver OOM'd, not a worker. Are you sure it was your workers that ran out of memory?
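One way to check is to see which daemon's log actually contains the OutOfMemoryError. A minimal sketch, assuming the stock standalone-mode log layout (the default log directory below is an assumption; adjust it to your deployment):

```shell
# Sketch: confirm which JVM actually threw the OOM before tuning memory.
# List every Spark daemon log under the given directory that records an
# OutOfMemoryError. A *Worker*.out hit points at the worker daemon itself;
# a *Master*.out (or driver) hit means the problem lives in that JVM instead.
find_oom_logs() {
  local log_dir="${1:-/opt/spark/logs}"   # assumed default; override as needed
  grep -l "java.lang.OutOfMemoryError" "$log_dir"/*.out 2>/dev/null
}
```

Running `find_oom_logs /opt/spark/logs` on each box narrows down whether the worker daemon or the driver-side JVM is the one hitting the GC overhead limit.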


From: bsikander <>
Sent: Friday, March 24, 2017 5:48 AM
Subject: [Worker Crashing] OutOfMemoryError: GC overhead limit exceeded

Spark version: 1.6.2
Hadoop: 2.6.0

All VMs are deployed on AWS:
1 Master (t2.large)
1 Secondary Master (t2.large)
5 Workers (m4.xlarge)
Zookeeper (t2.large)

Recently, 2 of our workers went down with an out-of-memory error:
java.lang.OutOfMemoryError: GC overhead limit exceeded (max heap: 1024 MB)

Both of these worker processes were in a hung state; we restarted them to
bring them back to a normal state.

Here is the complete exception:

Worker crashing

Master's spark-defaults.conf file:

Default Configuration file for MASTER

Slave's spark-defaults.conf file:

So, what could be the reason for our workers crashing with an OutOfMemoryError?
How can we avoid that in the future?
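The "max heap: 1024 MB" in the exception matches the default daemon heap for standalone Master/Worker processes (SPARK_DAEMON_MEMORY defaults to 1g). One common mitigation is to raise that heap and turn on GC logging so the next incident leaves evidence. A sketch for conf/spark-env.sh on each worker, not a definitive fix (the GC log path is an assumption; pick one that exists on your hosts):

```shell
# conf/spark-env.sh (on each worker) -- a sketch, not a definitive fix.
# SPARK_DAEMON_MEMORY sizes the heap of the standalone Master/Worker
# daemons themselves (default 1g, matching the "max heap: 1024 MB" above).
export SPARK_DAEMON_MEMORY=2g

# Optional: GC logging for the daemon JVMs, so a future "GC overhead limit
# exceeded" can be diagnosed from the log. Log path is an assumption.
export SPARK_DAEMON_JAVA_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/spark/worker-gc.log"
```

Note this only helps if the worker daemon itself is the JVM that OOM'd; if the OOM came from the driver (as the "dispatcher-event-loop" thread name suggests), spark.driver.memory is the knob to look at instead.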

