spark-user mailing list archives

From Zhiliang Zhu <>
Subject Re: Spark driver getting out of memory
Date Mon, 18 Jul 2016 10:37:56 GMT
Try setting --driver-memory xg, with x as large as the driver host can spare.
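For reference, a minimal sketch of how that flag might be passed to spark-submit. The class name, jar name, master, and the 8g value below are placeholders for illustration, not details from this thread:

```shell
# Hypothetical spark-submit invocation; adjust --driver-memory to what the
# driver host can spare. The DAGScheduler runs inside the driver JVM, and its
# task/stage bookkeeping for ~20K partitions can exhaust a small driver heap.
spark-submit \
  --master yarn \
  --class com.example.MyJob \
  --driver-memory 8g \
  --executor-memory 10g \
  my-job.jar
```

Note that --driver-memory must be set on the command line (or in spark-defaults.conf); setting spark.driver.memory inside the application has no effect in client mode, because the driver JVM has already started by then.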

    On Monday, July 18, 2016 6:31 PM, Saurav Sinha <> wrote:

I am running a Spark job.
Master memory: 5G; executor memory: 10G (running on 4 nodes).
My job is getting killed as the number of partitions increases to 20K.
16/07/18 14:53:13 INFO DAGScheduler: Got job 17 (foreachPartition at <call site truncated>) with 13524 output partitions (allowLocal=false)
16/07/18 14:53:13 INFO DAGScheduler: Final stage: ResultStage 640 (foreachPartition at <call site truncated>)
16/07/18 14:53:13 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 518, ShuffleMapStage 639)
16/07/18 14:53:23 INFO DAGScheduler: Missing parents: List()
16/07/18 14:53:23 INFO DAGScheduler: Submitting ResultStage 640 (MapPartitionsRDD[271] at map at <call site truncated>), which has no missing parents
16/07/18 14:53:23 INFO MemoryStore: ensureFreeSpace(8248) called with curMem=41923262, maxMem=2778778828
16/07/18 14:53:23 INFO MemoryStore: Block broadcast_90 stored as values in memory (estimated size 8.1 KB, free 2.5 GB)
Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space
        at <frame truncated>
        at org.xerial.snappy.SnappyOutputStream.dumpOutput(
        at org.xerial.snappy.SnappyOutputStream.flush(

Help needed. 

Thanks and Regards,
Saurav Sinha
Contact: 9742879062
