spark-user mailing list archives

From Corey Nolet <>
Subject Re: Shuffle memory woes
Date Sat, 06 Feb 2016 18:13:10 GMT

Thank you for the response but unfortunately, the problem I'm referring to
goes beyond this. I have set the shuffle memory fraction to 90% and set
the cache memory fraction to 0. Repartitioning the RDD helped a tad on the
map side but didn't do much for the spilling once there was no longer any
memory left for the shuffle. Also, the new automatic memory management
doesn't seem like it will have much of an effect after I've already given
most of the memory I've allocated to the shuffle. The problem I'm having is
most specifically that shuffle performance declines by several orders of
magnitude when the shuffle needs to spill multiple times (it ends up
spilling several hundred times for me when it can't fit everything into
memory).
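
For concreteness, here is roughly the setup I'm describing, as a minimal
sketch using the legacy Spark 1.x settings. The app name, input path, and
the 2000-partition count are illustrative placeholders, not values from my
actual job:

    import org.apache.spark.{SparkConf, SparkContext}

    // Legacy (pre-1.6) memory settings: give nearly all of the heap's
    // execution memory to the shuffle and reserve nothing for caching.
    val conf = new SparkConf()
      .setAppName("shuffle-heavy-job")              // illustrative name
      .set("spark.shuffle.memoryFraction", "0.9")   // 90% for shuffle
      .set("spark.storage.memoryFraction", "0.0")   // no cache memory
    val sc = new SparkContext(conf)

    // Repartitioning helped a tad on the map side; 2000 is just an
    // example, sized to your data and cluster.
    val rdd = sc.textFile("hdfs://.../input").repartition(2000)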

On Sat, Feb 6, 2016 at 6:40 AM, Igor Berman <> wrote:

> Hi,
> usually you can solve this in two steps:
> make the RDD have more partitions
> play with the shuffle memory fraction
> in Spark 1.6, the cache vs. shuffle memory fractions are adjusted
> automatically
> On 5 February 2016 at 23:07, Corey Nolet <> wrote:
>> I just recently discovered that my jobs were taking several hours to
>> complete because of excess shuffle spills. What I found was that when I
>> hit the high point where I didn't have enough memory for the shuffles to
>> store all of their file consolidations at once, they could spill so many
>> times that my job's runtime increased by orders of magnitude (and
>> sometimes the job failed altogether).
>> I've played with all the tuning parameters I can find. To speed the
>> shuffles up, I tuned the Akka threads to different values. I also tuned
>> the shuffle buffering a tad (both up and down).
>> I feel like I see a weak point here. The mappers share memory space with
>> the reducers, and the shuffles need enough memory to consolidate and
>> pull; otherwise they will spill and spill and spill. What I've noticed
>> about my jobs is that this makes the difference between them taking 30
>> minutes and 4 hours or more. Same job, just different memory tuning.
>> I've found that, as a result of the spilling, I'm better off not caching
>> any data in memory, lowering my storage fraction to 0, and just hoping I
>> can give my shuffles enough memory that my data doesn't continuously
>> spill. Is this the way it's supposed to be? It makes things hard because
>> it effectively forces memory limits on my job; otherwise it could take
>> orders of magnitude longer to execute.
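
For reference, the Spark 1.6 behavior Igor mentions is the unified memory
manager, where execution (shuffle) and storage share a single region and
can borrow from each other. A minimal sketch of the relevant knobs,
assuming the 1.6 defaults:

    import org.apache.spark.SparkConf

    // Spark 1.6 unified memory: shuffle and storage share one region, so
    // the old per-purpose fractions are ignored unless legacy mode is
    // re-enabled.
    val conf = new SparkConf()
      .set("spark.memory.fraction", "0.75")        // unified region share of heap (1.6 default)
      .set("spark.memory.storageFraction", "0.5")  // cached blocks within this portion resist eviction
      .set("spark.memory.useLegacyMode", "false")  // "true" restores the old fractions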
