spark-user mailing list archives

From Barak Gitsis <bar...@similarweb.com>
Subject Re: About memory leak in spark 1.4.1
Date Sun, 02 Aug 2015 08:11:29 GMT
Hi,
reducing spark.storage.memoryFraction did the trick for me. The heap doesn't
get filled, because that fraction of it is reserved.
My reasoning is:
I give the executor all the memory I can, so that sets the upper bound.
From there I try to make the best use of the memory I have.
spark.storage.memoryFraction is, in a sense, user data space; the rest can be
used by the system.
If you don't have so much data that you MUST keep it in memory for
performance, it's better to give Spark more working space.
I ended up setting it to 0.3.
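For reference, a minimal sketch of how that looks when building the SparkConf
(the app name is a placeholder, and the 50g figure just mirrors the numbers
quoted below; the same settings can also be passed with --conf to spark-submit):

  import org.apache.spark.{SparkConf, SparkContext}

  // Assuming a 50g executor heap, a storage fraction of 0.3 reserves roughly
  // 15g for cached/persisted RDD blocks and leaves the remaining ~35g of heap
  // for shuffle, task execution and everything else the JVM needs.
  val conf = new SparkConf()
    .setAppName("example-app")                    // placeholder name
    .set("spark.executor.memory", "50g")
    .set("spark.storage.memoryFraction", "0.3")   // default is 0.6 on 1.3/1.4
  val sc = new SparkContext(conf)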

All that said, this is on Spark 1.3 on a cluster.

Hope that helps.

On Sat, Aug 1, 2015 at 5:43 PM Sea <261810726@qq.com> wrote:

> Hi, all
> I upgraded Spark to 1.4.1 and many applications failed... I find the heap
> memory is not full, but the CoarseGrainedExecutorBackend process takes more
> memory than I expect, and it keeps increasing over time; eventually it
> exceeds the server's limit and the worker dies.....
>
> Can anyone help?
>
> Mode: standalone
>
> spark.executor.memory 50g
>
> 25583 xiaoju    20   0 75.5g  55g  28m S 1729.3 88.1   2172:52 java
>
> 55g is more than the 50g I requested.
>
--
*-Barak*
