spark-user mailing list archives

From ankits <ankitso...@gmail.com>
Subject Limit # of parallel parquet decompresses
Date Thu, 12 Mar 2015 23:07:04 GMT
My jobs frequently run out of memory if the # of cores on an executor is too
high, because each core launches its own parquet decompressor thread, and each
of those allocates off-heap memory for decompression. Consequently, even with,
say, 12 cores on an executor, depending on the available memory I can only use
2-3 of them to avoid OOMs when reading parquet files.
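
For illustration, here is roughly the workaround I am stuck with today. This is
only a sketch (the app name, memory size, and path are made up, spark-shell
style): capping spark.executor.cores keeps the number of concurrent decompressor
threads, and thus their off-heap buffers, low, but it also leaves the other
cores idle.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Sketch of the current workaround: run only 2-3 tasks per executor so that
// only 2-3 parquet decompressor threads (and their off-heap buffers) exist at once.
val conf = new SparkConf()
  .setAppName("parquet-read")            // hypothetical app name
  .set("spark.executor.cores", "3")      // well below the 12 cores the machine has
  .set("spark.executor.memory", "8g")    // hypothetical heap size
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
val data = sqlContext.parquetFile("hdfs:///path/to/data.parquet")  // hypothetical path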

Ideally I would like to use all 12 cores but limit the # of parallel parquet
decompressions to 2-3 per executor. Is there some way to do this?
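
To make the ask concrete, here is a sketch of the kind of bound I have in mind,
written as if the decompress call were under my control (it is not; it happens
inside the parquet input format), so please treat this as pseudocode for the
behaviour I want rather than something I can actually wire in.

import java.util.concurrent.Semaphore

// Hypothetical: one gate per executor JVM that allows at most 3 decompressions
// to run at the same time, while all 12 cores keep executing tasks.
object DecompressGate {
  val permits = new Semaphore(3)
}

def boundedDecompress[T](decompress: => T): T = {
  DecompressGate.permits.acquire()
  try decompress finally DecompressGate.permits.release()
}

Something equivalent at the executor level (all cores busy, but at most 2-3
decompressor threads alive at any moment) is what I am after.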

Thanks,
Ankit





