spark-user mailing list archives

From Chetan Khatri <>
Subject Repartitioning Hive tables - Container killed by YARN for exceeding memory limits
Date Wed, 02 Aug 2017 12:58:31 GMT
Hello Spark Users,

I am reading from an HBase table and writing to a Hive managed table,
partitioned by a date column. That worked fine, but it generated a large
number of files across almost 700 partitions, so I wanted to use
repartition to reduce file I/O by reducing the number of files inside each
partition.
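As a rough model of why this helps (a toy sketch, not Spark's actual writer; all names here are made up): when writing a table partitioned by date, each write task can create one file in every date partition it holds rows for, so the file count is roughly the number of distinct (task, date) pairs. Repartitioning by the date column routes all rows for a date to a single task, collapsing that count.

```python
# Toy model (NOT Spark itself) of output-file counts when writing a
# date-partitioned table. Assumption: each task writes one file per
# distinct date it holds.
from collections import defaultdict

def count_output_files(rows, num_tasks, repartition_by_date):
    """rows: list of (date, payload). Returns modeled number of files."""
    tasks = defaultdict(set)
    for i, (date, _) in enumerate(rows):
        # Without repartition, rows spread across tasks round-robin;
        # with repartition("date"), all rows for a date hit one task.
        task = hash(date) % num_tasks if repartition_by_date else i % num_tasks
        tasks[task].add(date)
    return sum(len(dates) for dates in tasks.values())

# 10 dates, 50 rows each, written by 8 tasks.
rows = [(f"2017-07-{d:02d}", None) for d in range(1, 11) for _ in range(50)]

scattered = count_output_files(rows, num_tasks=8, repartition_by_date=False)
clustered = count_output_files(rows, num_tasks=8, repartition_by_date=True)
print(scattered, clustered)  # prints 80 10
```

Without clustering, every one of the 8 tasks sees all 10 dates (80 files); after clustering by date, each date lands on exactly one task (10 files).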

*But I ended up with the exception below:*

ExecutorLostFailure (executor 11 exited caused by one of the running tasks)
Reason: Container killed by YARN for exceeding memory limits. 14.0 GB of 14
GB physical memory used. Consider boosting spark.yarn.executor.
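For context on where the 14 GB limit comes from: the YARN container size is the executor heap plus an off-heap overhead. In Spark on YARN the documented default overhead is max(384 MB, 10% of executor memory), and it can be raised via `spark.yarn.executor.memoryOverhead` (value in MB). A quick check against the settings below (the function name is mine):

```python
# Model of the YARN container limit: executor memory + overhead.
# Spark's documented default overhead is max(384 MB, 10% of executor
# memory); YARN may then round the container up to its allocation
# increment, which is likely why the log shows a 14 GB limit here.
def default_memory_overhead_mb(executor_memory_mb):
    return max(384, int(0.10 * executor_memory_mb))

executor_mb = 12 * 1024  # --executor-memory 12g
overhead_mb = default_memory_overhead_mb(executor_mb)
container_mb = executor_mb + overhead_mb
print(overhead_mb, container_mb)  # prints 1228 13516
```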

Driver memory = 4g, executor memory = 12g, num-executors = 8, executor-cores = 8
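For reference, those resources map onto spark-submit flags roughly as below; raising `spark.yarn.executor.memoryOverhead` (in MB) is the usual first response to this particular error. This is a sketch only: the 3072 value and the jar name are illustrative, not a recommendation.

```shell
# Sketch: same resources as listed, with the YARN off-heap overhead
# raised explicitly. spark.yarn.executor.memoryOverhead is in MB;
# 3072 and your_job.jar are placeholders.
spark-submit \
  --driver-memory 4g \
  --executor-memory 12g \
  --num-executors 8 \
  --executor-cores 8 \
  --conf spark.yarn.executor.memoryOverhead=3072 \
  your_job.jar
```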

Do you think the setting below can help me overcome the above issue:


Because the default maximum number of partitions is 1000.
