spark-issues mailing list archives

From "Deenbandhu Agarwal (JIRA)" <>
Subject [jira] [Commented] (SPARK-19644) Memory leak in Spark Streaming
Date Mon, 20 Feb 2017 14:40:44 GMT


Deenbandhu Agarwal commented on SPARK-19644:

Sorry for the delayed response.

No, I didn't run it in the Spark shell. I ran it using spark-submit in client deploy mode on a
standalone Spark cluster.
I ran Eclipse MAT on the heap dump and attached a screenshot of the dominator tree. I hope this
will help you find the cause of the memory leak.

Also attached is the path to GC root for the `scala.reflect.runtime.JavaUniverse` object (from the
smaller heap dump taken at application start).

When I checked the GC root for an object of `scala.collection.immutable.$colon$colon`, the path
contains the same object (`scala.reflect.runtime.JavaUniverse`).
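The shape described here, steady old-generation growth with millions of `scala.collection.immutable.$colon$colon` instances (Scala's list cons cell) all reachable from one long-lived singleton, is the classic signature of a static-cache leak. A minimal Java analogy of that pattern, purely illustrative and not Spark's actual code (class and field names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class StaticCacheLeak {
    // A long-lived GC root, analogous to the singleton JavaUniverse
    // seen at the top of the dominator tree in the heap dump.
    static final List<int[]> CACHE = new ArrayList<>();

    static void processBatch(int batchId) {
        // Each "batch" allocates data that the static root keeps reachable,
        // so the GC can never reclaim it and old-gen usage grows steadily.
        CACHE.add(new int[1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            processBatch(i);
        }
        // Every allocation from every batch is still reachable from the root.
        System.out.println(CACHE.size()); // prints 1000
    }
}
```

In Eclipse MAT, such a leak shows up exactly as reported: "Path to GC Roots" on any leaked instance leads back to the same retaining singleton.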

> Memory leak in Spark Streaming
> ------------------------------
>                 Key: SPARK-19644
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 2.0.2
>         Environment: 3 AWS EC2 c3.xLarge
> Number of cores - 3
> Number of executors 3 
> Memory to each executor 2GB
>            Reporter: Deenbandhu Agarwal
>            Priority: Critical
>              Labels: memory_leak, performance
>         Attachments: Dominator_tree.png, heapdump.png, Path2GCRoot.png
> I am using Spark Streaming in production for some aggregation, fetching data from Cassandra
> and saving data back to Cassandra.
> I see a gradual increase in old-generation heap capacity from 1161216 bytes to 1397760
> bytes over a period of six hours.
> After 50 hours of processing, instances of the class scala.collection.immutable.$colon$colon
> increased to 12,811,793, which is a huge number.
> I think this is a clear case of a memory leak.

This message was sent by Atlassian JIRA
