spark-issues mailing list archives

From "Thangamani Murugasamy (Jira)" <>
Subject [jira] [Commented] (SPARK-21181) Suppress memory leak errors reported by netty
Date Fri, 30 Aug 2019 21:44:00 GMT


Thangamani Murugasamy commented on SPARK-21181:

I have the same problem in Spark 2.3:


ERROR util.ResourceLeakDetector: LEAK: ByteBuf.release() was not called before it's garbage-collected.
See for more information.
Recent access records:
[Stage 0:===============================================>         (25 + 5) / 30]19/08/30
16:39:07 ERROR datasources.FileFormatWriter: Aborting job null.
java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
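Both messages in the log above can be addressed at submit time: the netty leak report can be traced by raising the leak-detection level, and the TimeoutException is raised while waiting on a broadcast exchange in SparkPlan.executeBroadcast, whose wait is governed by spark.sql.broadcastTimeout (300 seconds by default). A minimal spark-submit sketch; the class name, jar name, and the 1200-second value are placeholders, not values from this report:

```shell
# Hypothetical job; only the --conf keys are real Spark settings.
spark-submit \
  --class com.example.MyJob \
  --conf "spark.driver.extraJavaOptions=-Dio.netty.leakDetection.level=advanced" \
  --conf "spark.executor.extraJavaOptions=-Dio.netty.leakDetection.level=advanced" \
  --conf spark.sql.broadcastTimeout=1200 \
  my-job.jar
```

Raising the timeout only masks a slow broadcast; if the broadcast side is genuinely large, disabling the broadcast join (spark.sql.autoBroadcastJoinThreshold=-1) may be the better fix.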

> Suppress memory leak errors reported by netty
> ---------------------------------------------
>                 Key: SPARK-21181
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.1.0
>            Reporter: Dhruve Ashar
>            Assignee: Dhruve Ashar
>            Priority: Minor
>             Fix For: 2.1.2, 2.2.0, 2.3.0
> We are seeing netty report memory leak errors like the one below after switching to 2.1.

> {code}
> ERROR ResourceLeakDetector: LEAK: ByteBuf.release() was not called before it's garbage-collected.
> Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak
> reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel()
> See for more information.
> {code}
> Looking a bit deeper, Spark is not leaking any memory here, but it is confusing for the
> user to see the error message in the driver logs.
> After enabling '-Dio.netty.leakDetection.level=advanced', netty reveals the SparkSaslServer
> to be the source of these leaks.
> Sample trace:

This message was sent by Atlassian Jira
