spark-issues mailing list archives

From "Thangamani Murugasamy (Jira)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-21181) Suppress memory leak errors reported by netty
Date Fri, 30 Aug 2019 21:44:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-21181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919903#comment-16919903
] 

Thangamani Murugasamy commented on SPARK-21181:
-----------------------------------------------

I have the same problem in Spark 2.3:

 

{code}
ERROR util.ResourceLeakDetector: LEAK: ByteBuf.release() was not called before it's garbage-collected.
See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
[Stage 0:===============================================>         (25 + 5) / 30]
19/08/30 16:39:07 ERROR datasources.FileFormatWriter: Aborting job null.
java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
{code}
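Note that the TimeoutException in this trace comes from BroadcastExchangeExec waiting on a broadcast, and 300 seconds is exactly the default of spark.sql.broadcastTimeout. Independent of the netty leak warning, a possible workaround for the job abort (illustrative values only, tune for your workload) would be:

```shell
# Sketch: raise the broadcast wait beyond the 300 s default that matches the
# trace above, or set the threshold to -1 to disable automatic broadcast joins
# so large build sides fall back to shuffle joins.
spark-submit \
  --conf spark.sql.broadcastTimeout=1200 \
  --conf spark.sql.autoBroadcastJoinThreshold=-1 \
  ...
```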

> Suppress memory leak errors reported by netty
> ---------------------------------------------
>
>                 Key: SPARK-21181
>                 URL: https://issues.apache.org/jira/browse/SPARK-21181
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.1.0
>            Reporter: Dhruve Ashar
>            Assignee: Dhruve Ashar
>            Priority: Minor
>             Fix For: 2.1.2, 2.2.0, 2.3.0
>
>
> We are seeing netty report memory leak errors like the one below after switching to 2.1.

> {code}
> ERROR ResourceLeakDetector: LEAK: ByteBuf.release() was not called before it's garbage-collected.
> Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak
> reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel()
> See http://netty.io/wiki/reference-counted-objects.html for more information.
> {code}
> Looking a bit deeper, Spark is not leaking any memory here, but it is confusing for the
> user to see the error message in the driver logs.
> After enabling '-Dio.netty.leakDetection.level=advanced', netty reveals the SparkSaslServer
> to be the source of these leaks.
> Sample trace: https://gist.github.com/dhruve/b299ebc35aa0a185c244a0468927daf1
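For reference, the advanced leak reporting described above can be passed to both the driver and the executors through their extra JVM options; a sketch of such an invocation (the application jar and remaining arguments are placeholders):

```shell
# Sketch: enable netty's advanced leak detection on driver and executors.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dio.netty.leakDetection.level=advanced" \
  --conf "spark.executor.extraJavaOptions=-Dio.netty.leakDetection.level=advanced" \
  ...
```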



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

