spark-user mailing list archives

From saurabh guru <saurabh.g...@gmail.com>
Subject Spark ResourceLeak?
Date Tue, 19 Jul 2016 13:12:19 GMT
I am running a Spark cluster on Mesos. The module reads data from Kafka as a
DirectStream and pushes it into Elasticsearch, after looking up names for the
IDs in Redis.

I have been getting this message in my worker logs.


16/07/19 11:17:44 ERROR ResourceLeakDetector: LEAK: You are creating too
many HashedWheelTimer instances. HashedWheelTimer is a shared resource
that must be reused across the JVM, so that only a few instances are
created.
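
(For context, the warning asks that a single timer instance be shared
JVM-wide instead of being created per batch or per connection. A minimal
Java sketch of that shared-singleton pattern, using the JDK's
ScheduledExecutorService as a stand-in for Netty's HashedWheelTimer; the
SharedTimer holder class is hypothetical:)

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Hypothetical holder: one shared timer for the whole JVM, reused by
// every task, rather than a fresh instance per batch or connection.
public final class SharedTimer {
    // Single daemon-threaded scheduler, created once at class load.
    private static final ScheduledExecutorService INSTANCE =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "shared-timer");
                t.setDaemon(true); // do not block JVM shutdown
                return t;
            });

    private SharedTimer() {}

    // Every caller gets the same instance, so only one timer thread
    // ever exists in the JVM.
    public static ScheduledExecutorService get() {
        return INSTANCE;
    }
}
```

(Code that currently constructs a new timer per task would call
SharedTimer.get() instead, which is what the leak detector is asking for.)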

I can't figure out the reason for the resource leak. When this happens,
the batches start slowing down and the pending queue keeps growing. There
is hardly any way to recover from that point, other than killing the job
and starting it again.

Any idea what is causing the resource leak? From searching, this message
seems to be related to Akka. I am using Spark 1.6.2.

-- 
Thanks,
Saurabh
