spark-issues mailing list archives

From "George Papa (Jira)" <j...@apache.org>
Subject [jira] [Issue Comment Deleted] (SPARK-29321) Possible memory leak in Spark
Date Tue, 01 Oct 2019 20:42:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-29321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

George Papa updated SPARK-29321:
--------------------------------
    Comment: was deleted

(was: I ran the code in the snippet (without any sleep time, in order to see the results
faster) and recorded the JVM memory usage for approximately 1 hour on both Spark 2.4.4 and
your branch with the patch.
 Spark JVM memory with Spark 2.4.4:
||Time||RES (MB)||SHR (KB)||MEM%||
|1min|{color:#de350b}1349{color}|32724|1.5|
|3min|{color:#de350b}1936{color}|32724|2.2|
|5min|{color:#de350b}2506{color}|32724|2.6|
|7min|{color:#de350b}2564{color}|32724|2.7|
|9min|{color:#de350b}2584{color}|32724|2.7|
|11min|{color:#de350b}2585{color}|32724|2.7|
|13min|{color:#de350b}2592{color}|32724|2.7|
|15min|{color:#de350b}2591{color}|32724|2.7|
|17min|{color:#de350b}2591{color}|32724|2.7|
|30min|{color:#de350b}2600{color}|32724|2.7|
|1h|{color:#de350b}2618{color}|32724|2.7|

 

Spark JVM memory with the Spark patch ([GitHub Pull Request #25973|https://github.com/apache/spark/pull/25973]):
||Time||RES (MB)||SHR (KB)||MEM%||
|1min|{color:#de350b}1134{color}|25380|1.4|
|3min|{color:#de350b}1520{color}|25380|1.6|
|5min|{color:#de350b}1570{color}|25380|1.6|
|7min|{color:#de350b}1598{color}|25380|1.7|
|9min|{color:#de350b}1613{color}|25380|1.7|
|11min|{color:#de350b}1616{color}|25380|1.7|
|15min|{color:#de350b}1620{color}|25380|1.7|
|17min|{color:#de350b}1625{color}|25380|1.7|
|30min|{color:#de350b}1629{color}|25380|1.7|
|1h|{color:#de350b}1660{color}|25380|1.7|

 

As you can see, the RES memory increases slightly over time in both cases. Also, when I
tested with a real streaming application in a testing environment, after several hours the
persisted dataframes overflowed the memory and spilled to disk.

*NOTE:* You can easily reproduce the above behavior by running the snippet code (I prefer
to run it without any sleep delay) and tracking the JVM memory with the top or htop command.
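
The tracking described in the note can also be scripted. A minimal sketch (Linux-only, since it reads /proc; `parse_vmrss_mb` and `resident_mb` are hypothetical helper names, not part of Spark) that samples the same VmRSS figure top/htop report as RES:

```python
import os

def parse_vmrss_mb(status_text):
    """Extract VmRSS (the resident set size, i.e. top's RES column)
    from the text of /proc/<pid>/status and convert it to MB."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1]) // 1024  # value is reported in kB
    return None  # e.g. kernel threads carry no VmRSS line

def resident_mb(pid):
    """Current RES of a process in MB, as top/htop report it (Linux only)."""
    with open("/proc/%d/status" % pid) as f:
        return parse_vmrss_mb(f.read())

if __name__ == "__main__" and os.path.exists("/proc/self/status"):
    # One sample of this process; call this in a loop with time.sleep()
    # to build a table like the ones above.
    print("RES = %s MB" % resident_mb(os.getpid()))
```

Pointing `resident_mb` at the Spark driver's PID and printing once a minute reproduces the RES column of the tables above.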

 )

> Possible memory leak in Spark
> -----------------------------
>
>                 Key: SPARK-29321
>                 URL: https://issues.apache.org/jira/browse/SPARK-29321
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.3.3
>            Reporter: George Papa
>            Priority: Major
>
> This issue is a clone of SPARK-29055. {color:#172b4d}After Spark version 2.3.3, I observe
that the JVM memory increases slightly over time. This behavior also affects application
performance, because when I run my real application in a testing environment, after a while
the persisted dataframes no longer fit into the executors' memory and spill to
disk.{color}
> {color:#172b4d}JVM memory usage (based on htop command){color}
> ||Time||RES (MB)||SHR (KB)||MEM%||
> |1min|{color:#de350b}1349{color}|32724|1.5|
> |3min|{color:#de350b}1936{color}|32724|2.2|
> |5min|{color:#de350b}2506{color}|32724|2.6|
> |7min|{color:#de350b}2564{color}|32724|2.7|
> |9min|{color:#de350b}2584{color}|32724|2.7|
> |11min|{color:#de350b}2585{color}|32724|2.7|
> |13min|{color:#de350b}2592{color}|32724|2.7|
> |15min|{color:#de350b}2591{color}|32724|2.7|
> |17min|{color:#de350b}2591{color}|32724|2.7|
> |30min|{color:#de350b}2600{color}|32724|2.7|
> |1h|{color:#de350b}2618{color}|32724|2.7|
>  
> *HOW TO REPRODUCE THIS BEHAVIOR:*
>  Reproduce the above behavior by running the snippet code (I prefer to run it without
any sleep delay) and tracking the JVM memory with the top or htop command.
> {code:python}
> import os
> import time
>
> from pyspark.sql import SparkSession
>
> target_dir = "..."
>
> spark = SparkSession.builder.appName("DataframeCount").getOrCreate()
>
> while True:
>     for f in os.listdir(target_dir):
>         df = spark.read.load(os.path.join(target_dir, f), format="csv")
>         print("Number of records: {0}".format(df.count()))
>         time.sleep(15)
> {code}
>  
> *TESTED CASES WITH THE SAME BEHAVIOUR:*
>  * Default settings (spark-defaults.conf)
>  * Setting {{spark.cleaner.periodicGC.interval}} to 1min (or less)
>  * Setting {{spark.cleaner.referenceTracking.blocking}}=false
>  * Running the application in cluster mode
>  * Increasing/decreasing the resources of the executors and driver
>  * Adding -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:ConcGCThreads=12 to the
driver and executor extraJavaOptions
>  * Spark 2.4.4 (the latest release at the time) shows the same behavior.
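> For reference, the cleaner and GC settings tried above can be combined in
spark-defaults.conf; a sketch of the configuration as tested in this report (not a
recommended fix):
> {code}
> spark.cleaner.periodicGC.interval          1min
> spark.cleaner.referenceTracking.blocking   false
> spark.driver.extraJavaOptions              -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:ConcGCThreads=12
> spark.executor.extraJavaOptions            -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:ConcGCThreads=12
> {code}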
>   
> *DEPENDENCIES*
>  * Operation system: Ubuntu 16.04.3 LTS
>  * Java: jdk1.8.0_131 (tested also with jdk1.8.0_221)
>  * Python: Python 2.7.12



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

