ResourceLeakDetector doesn't come from Spark itself; it is Netty's leak detector (io.netty.util.ResourceLeakDetector).

Please check your dependencies for the potential leak.
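The warning itself points at the fix: HashedWheelTimer is meant to be created once and reused JVM-wide, so somewhere in the application or a dependency one is probably being constructed per record or per batch. Here is a minimal sketch of the shared-instance pattern, using java.util.Timer from the standard library as a stand-in for Netty's HashedWheelTimer (the holder idiom is the same either way):

```java
import java.util.Timer;

// Sketch of the shared-instance pattern Netty's warning asks for.
// java.util.Timer stands in for io.netty.util.HashedWheelTimer here;
// the point is that the JVM holds ONE instance and every caller reuses it,
// instead of constructing a new timer per task or per batch.
public final class SharedTimer {
    // One JVM-wide instance, created once when the class is loaded.
    // The daemon flag keeps the timer thread from blocking JVM shutdown.
    private static final Timer INSTANCE = new Timer("shared-timer", true);

    private SharedTimer() {}  // no instances; access only via get()

    public static Timer get() {
        return INSTANCE;
    }
}
```

Callers would then do `SharedTimer.get().schedule(task, delayMs)` rather than `new Timer(...)` in a loop. If a library you depend on is the one constructing the timers, the fix is usually reusing one client/connection object (e.g. one Redis or Elasticsearch client per executor) instead of creating one per batch.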


On Tue, Jul 19, 2016 at 6:11 AM, Guruji <> wrote:
I am running a Spark cluster on Mesos. The module reads data from Kafka as a DirectStream and pushes it into Elasticsearch, after looking up names against IDs in Redis.

I have been getting this message in my worker logs.

*16/07/19 11:17:44 ERROR ResourceLeakDetector: LEAK: You are creating too
many HashedWheelTimer instances. HashedWheelTimer is a shared resource that
must be reused across the JVM, so that only a few instances are created.*

I can't figure out the reason for the resource leak. When it happens, the batches start slowing down and the pending queue keeps growing. There is hardly any going back from there, other than killing the job and restarting it.

Any idea why the resource leak occurs? From what I found searching, this message seems to be related to Akka. I am using Spark 1.6.2.

Sent from the Apache Spark User List mailing list archive.
