spark-issues mailing list archives

From "Patrick Wendell (JIRA)" <>
Subject [jira] [Commented] (SPARK-3731) RDD caching stops working in pyspark after some time
Date Mon, 06 Oct 2014 04:48:34 GMT


Patrick Wendell commented on SPARK-3731:

[~davies] any chance you can take a look at this?

> RDD caching stops working in pyspark after some time
> ----------------------------------------------------
>                 Key: SPARK-3731
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, Spark Core
>    Affects Versions: 1.1.0
>         Environment: Linux, 32bit, both in local mode or in standalone cluster mode
>            Reporter: Milan Straka
>            Assignee: Davies Liu
>         Attachments: spark-3731.log, spark-3731.txt.bz2, worker.log
> Consider a file F which, when loaded with sc.textFile and cached, takes up slightly more
> than half of the free memory available for the RDD cache.
> When in PySpark the following is executed:
>   1) a = sc.textFile(F)
>   2) a.cache().count()
>   3) b = sc.textFile(F)
>   4) b.cache().count()
> and then the following is repeated (for example 10 times):
>   a) a.unpersist().cache().count()
>   b) b.unpersist().cache().count()
> after some time, no RDDs remain cached in memory.
> Also, since that time, no other RDD ever gets cached: the worker always reports something
> like "WARN CacheManager: Not enough space to cache partition rdd_23_5 in memory! Free memory
> is 277478190 bytes.", even though rdd_23_5 is only ~50MB. The Executors tab of the Application
> Detail UI shows that all executors have 0MB memory used (which is consistent with the
> CacheManager warnings).
> When doing the same in Scala, everything works perfectly.
> I understand that this is a vague description, but I do not know how to describe the problem better.
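A side note on the quoted CacheManager warning: 277478190 bytes of reported free memory is roughly 264 MiB, several times the ~50MB partition size, so the message contradicts itself. A quick check in plain Python, using only the values quoted in the report above:

```python
# Values quoted in the CacheManager warning and the bug report above.
free_bytes = 277_478_190   # "Free memory is 277478190 bytes."
partition_mib = 50         # rdd_23_5 is reported as ~50MB

free_mib = free_bytes / (1024 * 1024)
print(f"reported free memory: {free_mib:.1f} MiB")  # ~264.6 MiB
print(f"partition size:       ~{partition_mib} MiB")

# The warning claims "not enough space" even though the reported free
# memory is far larger than the partition, pointing at broken accounting
# rather than genuine memory pressure.
print("warning self-consistent:", free_mib < partition_mib)  # False
```

This supports the report's suggestion that the executor's memory bookkeeping (not actual memory exhaustion) is the problem, consistent with the UI showing 0MB used.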

This message was sent by Atlassian JIRA

