spark-user mailing list archives

From Vadim Semenov <va...@datadoghq.com.INVALID>
Subject Re: Driver OutOfMemoryError in MapOutputTracker$.serializeMapStatuses for 40 TB shuffle.
Date Mon, 11 Nov 2019 15:42:58 GMT
There's an umbrella ticket for various 2GB limitations
https://issues.apache.org/jira/browse/SPARK-6235
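
For context on what actually overflows here: a JVM byte array is indexed by
a 32-bit Int, so a single Array[Byte] tops out just under Integer.MAX_VALUE
bytes (~2 GiB), no matter how much heap the driver has. A minimal Scala
sketch of that ceiling (illustrative only; it needs several GiB of heap,
e.g. -Xmx8g, to fail on the array size rather than on the heap itself):

    import java.io.ByteArrayOutputStream

    object ArrayCeilingDemo {
      def main(args: Array[String]): Unit = {
        val out   = new ByteArrayOutputStream()
        val chunk = new Array[Byte](64 * 1024 * 1024) // 64 MiB per write
        try {
          // The backing Array[Byte] grows as bytes are written; once the
          // requested capacity can no longer fit in an Int, an
          // OutOfMemoryError fires regardless of free heap -- the same
          // ceiling MapOutputTracker hits in the trace below.
          while (true) out.write(chunk, 0, chunk.length)
        } catch {
          case e: OutOfMemoryError =>
            println(s"hit the ~2 GiB byte-array ceiling after ${out.size()} bytes: $e")
        }
      }
    }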

On Fri, Nov 8, 2019 at 4:11 PM Jacob Lynn <abeboparebop@gmail.com> wrote:
>
> Sorry for the noise, folks! I understand that reducing the number of
> partitions works around the issue (at the scale I'm working at, anyway)
> -- as I mentioned in my initial email -- and I understand the root
> cause. I'm not looking for advice on how to resolve my issue. I'm just
> pointing out that this is a real bug/limitation that impacts real-world
> use cases, in case there is some proper Spark dev out there who is
> looking for a problem to solve.
>
> On Fri, Nov 8, 2019 at 2:24 PM Vadim Semenov <vadim@datadoghq.com.invalid> wrote:
>>
>> Basically, the driver tracks the shuffle's map output partitions and
>> sends that information over to the executors. What it's trying to do
>> here is serialize and compress that mapping, but the result is so big
>> that it goes over 2 GiB, which is Java's limit on the maximum size of
>> a byte array, so the whole thing fails.
>>
>> The size of the data doesn't matter much here; the number of
>> partitions is the root cause of the issue. Try reducing it below
>> 30000 and see how it goes.
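>>
>> As a rough back-of-envelope (every constant below is an assumption for
>> illustration, not a measurement): the driver keeps one MapStatus per
>> map task, and past 2000 reduce partitions each one is a
>> HighlyCompressedMapStatus whose dominant cost is a bitmap over the
>> reduce partitions, so the payload scales with mapTasks x
>> reducePartitions. Pasting this into any Scala REPL:
>>
>>     // Hedged estimate of the serialized map-status payload for the job
>>     // below; the split size and per-status cost are assumptions.
>>     val inputBytes     = 40L * 1024 * 1024 * 1024 * 1024 // ~40 TB of input
>>     val splitBytes     = 128L * 1024 * 1024              // assumed 128 MB splits
>>     val mapTasks       = inputBytes / splitBytes         // ~327,680 map tasks
>>     val reduceParts    = 300000L                         // from the job below
>>     val bytesPerStatus = reduceParts / 8                 // ~37.5 KB bitmap each, pre-gzip
>>     val totalBytes     = mapTasks * bytesPerStatus       // ~12 GB, far past 2 GiB
>>     println(f"~${totalBytes / 1e9}%.1f GB of map statuses vs a ~2.1 GB array cap")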
>>
>> On Fri, Sep 7, 2018 at 10:35 AM Harel Gliksman <harelglik@gmail.com> wrote:
>> >
>> > Hi,
>> >
>> > We are running a Spark (2.3.1) job on an EMR cluster with 500
>> > r3.2xlarge instances (60 GB RAM, 8 vcores, 160 GB SSD). Driver
>> > memory is set to 25 GB.
>> >
>> > It processes ~40 TB of data using aggregateByKey, in which we
>> > specify numPartitions = 300,000.
>> > Map side tasks succeed, but reduce side tasks all fail.
>> >
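>> > For reference, a sketch of the job's rough shape (a reconstruction:
>> > the paths, keying, zero value, and combine functions are
>> > placeholders; only the 300,000 partition count comes from the run
>> > above):
>> >
>> >     import org.apache.spark.sql.SparkSession
>> >
>> >     object FortyTbShuffleShape {
>> >       def main(args: Array[String]): Unit = {
>> >         val sc = SparkSession.builder().appName("40tb-aggregate")
>> >           .getOrCreate().sparkContext
>> >         val records = sc.textFile("hdfs:///path/to/input") // hypothetical path
>> >           .map(line => (line.take(16), 1L))                // hypothetical keying
>> >         // 300,000 reduce partitions: map-side tasks finish, but the
>> >         // driver must then serialize + gzip one MapStatus per map
>> >         // task, and that array passes 2 GiB before any reducer can fetch.
>> >         val aggregated = records.aggregateByKey(0L, 300000)(_ + _, _ + _)
>> >         aggregated.saveAsTextFile("hdfs:///path/to/output") // hypothetical path
>> >       }
>> >     }
>> >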
>> > We notice the following driver error:
>> >
>> > 18/09/07 13:35:03 WARN Utils: Suppressing exception in finally: null
>> >
>> >  java.lang.OutOfMemoryError
>> >
>> > at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
>> > at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
>> > at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>> > at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
>> > at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
>> > at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:211)
>> > at java.util.zip.GZIPOutputStream.write(GZIPOutputStream.java:145)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.writeBlockHeader(ObjectOutputStream.java:1894)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1875)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.flush(ObjectOutputStream.java:1822)
>> > at java.io.ObjectOutputStream.flush(ObjectOutputStream.java:719)
>> > at java.io.ObjectOutputStream.close(ObjectOutputStream.java:740)
>> > at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$2.apply$mcV$sp(MapOutputTracker.scala:790)
>> > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1389)
>> > at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:789)
>> > at org.apache.spark.ShuffleStatus.serializedMapStatus(MapOutputTracker.scala:174)
>> > at org.apache.spark.MapOutputTrackerMaster$MessageLoop.run(MapOutputTracker.scala:397)
>> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>> > at java.lang.Thread.run(Thread.java:748)
>> > Exception in thread "map-output-dispatcher-0" java.lang.OutOfMemoryError
>> > at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
>> > at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
>> > at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>> > at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
>> > at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
>> > at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:211)
>> > at java.util.zip.GZIPOutputStream.write(GZIPOutputStream.java:145)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.writeBlockHeader(ObjectOutputStream.java:1894)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1875)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
>> > at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
>> > at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
>> > at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
>> > at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>> > at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply$mcV$sp(MapOutputTracker.scala:787)
>> > at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:786)
>> > at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$1.apply(MapOutputTracker.scala:786)
>> > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1380)
>> > at org.apache.spark.MapOutputTracker$.serializeMapStatuses(MapOutputTracker.scala:789)
>> > at org.apache.spark.ShuffleStatus.serializedMapStatus(MapOutputTracker.scala:174)
>> > at org.apache.spark.MapOutputTrackerMaster$MessageLoop.run(MapOutputTracker.scala:397)
>> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>> > at java.lang.Thread.run(Thread.java:748)
>> > Suppressed: java.lang.OutOfMemoryError
>> > at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
>> > at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
>> > at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>> > at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
>> > at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
>> > at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:211)
>> > at java.util.zip.GZIPOutputStream.write(GZIPOutputStream.java:145)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.writeBlockHeader(ObjectOutputStream.java:1894)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1875)
>> > at java.io.ObjectOutputStream$BlockDataOutputStream.flush(ObjectOutputStream.java:1822)
>> > at java.io.ObjectOutputStream.flush(ObjectOutputStream.java:719)
>> > at java.io.ObjectOutputStream.close(ObjectOutputStream.java:740)
>> > at org.apache.spark.MapOutputTracker$$anonfun$serializeMapStatuses$2.apply$mcV$sp(MapOutputTracker.scala:790)
>> > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1389)
>> > ... 6 more
>> >
>> > We found this reference (https://issues.apache.org/jira/browse/SPARK-1239)
>> > to a similar issue that was closed in 2016.
>> >
>> > Please advise,
>> >
>> > Harel.
>> >
>> >
>>
>>
>> --
>> Sent from my iPhone
>>


-- 
Sent from my iPhone
