spark-user mailing list archives

From Mich Talebzadeh <>
Subject Re: org.apache.spark.util.SparkUncaughtExceptionHandler
Date Thu, 10 Oct 2019 21:04:20 GMT
Hi Nimmi,

Can you send us the Spark parameters including the memory overhead, assuming
you are running on YARN?


[4] - 864GB

--num-executors 32

--executor-memory 21G

--executor-cores 4
--conf spark.yarn.executor.memoryOverhead=3000
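As a back-of-the-envelope check, the settings above can be added up to see how much YARN memory they request in total. This is a hedged sketch using only the figures quoted above; the "[4] - 864GB" label is not explained in the thread, so the sketch only shows how to sum the per-container requests, not what that label refers to.

```python
# Sketch: total YARN memory requested by the spark-submit settings above.
# Figures are taken from the quoted configuration; nothing else is assumed.
num_executors = 32           # --num-executors 32
executor_memory_gb = 21      # --executor-memory 21G
memory_overhead_mb = 3000    # spark.yarn.executor.memoryOverhead=3000

# Each YARN container holds the executor heap plus the off-heap overhead.
per_container_gb = executor_memory_gb + memory_overhead_mb / 1024.0
total_gb = num_executors * per_container_gb
print(f"per container: {per_container_gb:.1f} GB, total: {total_gb:.1f} GB")
```

With these numbers each container asks for roughly 24 GB, and 32 of them come to roughly 766 GB of YARN memory in total.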

The parameter spark.yarn.executor.memoryOverhead is explained as below:

spark.yarn.executor.memoryOverhead = executorMemory * 0.10, with a minimum
of 384

The amount of off-heap memory (in megabytes) to be allocated per executor.
This is memory that accounts for things like VM overheads, interned
strings, and other native overheads. This tends to grow with the executor
size (typically 6-10%).
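The default formula above can be sketched as a small function. This is a hedged illustration of the rule as described, not Spark's internal code; note also that in Spark 2.3+ the property is spelled spark.executor.memoryOverhead, with the same default.

```python
# Sketch of the default overhead rule quoted above:
# max(executorMemory * 0.10, 384), in megabytes.
def default_memory_overhead_mb(executor_memory_mb: int) -> int:
    MIN_OVERHEAD_MB = 384
    return max(int(executor_memory_mb * 0.10), MIN_OVERHEAD_MB)

print(default_memory_overhead_mb(21 * 1024))  # 21G executor -> 2150 MB
print(default_memory_overhead_mb(2 * 1024))   # small executor -> 384 MB floor
```

So a 21G executor gets about 2.1 GB of overhead by default; setting the overhead explicitly (as with 3000 above) overrides this.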


Dr Mich Talebzadeh

LinkedIn
*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

On Thu, 10 Oct 2019 at 21:39, Nimmi Cv <> wrote:

> I get the following error on executors while running my Spark job. I am
> reading data from a database. The data contains UTF-8 strings.
> ERROR org.apache.spark.util.SparkUncaughtExceptionHandler - Uncaught
> exception in thread Thread[Executor task launch worker for task 359,5,main]
> java.lang.OutOfMemoryError: Java heap space
>   at org.apache.spark.unsafe.types.UTF8String.fromAddress(
>   at org.apache.spark.sql.catalyst.expressions.UnsafeRow.getUTF8String(
>   at org.apache.spark.sql.execution.columnar.STRING$.getField(ColumnType.scala:452)
>   at org.apache.spark.sql.execution.columnar.STRING$.getField(ColumnType.scala:424)
>   at org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding$Encoder.gatherCompressibilityStats(compressionSchemes.scala:194)
>   at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$$anonfun$gatherCompressibilityStats$1.apply(CompressibleColumnBuilder.scala:74)
>   at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$$anonfun$gatherCompressibilityStats$1.apply(CompressibleColumnBuilder.scala:74)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$class.gatherCompressibilityStats(CompressibleColumnBuilder.scala:74)
> I am processing 100 GB of data with 10 executors of 14G each. I started
> with 12G executors and get the same error even with 14G and 3G of overhead
> memory.
> Thanks,
> Nimmi
