Any updates on this?
On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote:
Try increasing the parallelism by repartitioning; you may also increase spark.default.parallelism. You can also try decreasing the number of executor cores. Basically, this happens when the executor uses more memory than it asked for, and YARN kills the executor.
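For reference, a minimal sketch of those suggestions in Scala (the app name, input path, and all numeric values are illustrative assumptions, not tuned recommendations):

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative values only; tune for your own cluster and data size.
    val conf = new SparkConf()
      .setAppName("example-job")                      // hypothetical app name
      .set("spark.default.parallelism", "200")        // raise the default task parallelism
      .set("spark.executor.cores", "2")               // fewer cores per executor lowers per-executor memory pressure
    val sc = new SparkContext(conf)

    // Hypothetical input path; repartition spreads the work over more, smaller tasks,
    // so each task holds less data in memory at once.
    val records = sc.textFile("hdfs:///path/to/input")
    val repartitioned = records.repartition(400)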
On Wed, Sep 28, 2016 at 12:17 PM, Aditya <aditya.calangutkar@augmentiq.
I have a Spark job which runs fine for small data, but when the data size increases it gives an executor lost error. My executor and driver memory are already set at their maximum. I have also tried increasing --conf spark.yarn.executor.memoryOverhead=600, but I am still not able to fix the problem. Is there any other solution?
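For context, the same overhead setting can also be applied in code rather than on the command line; a minimal sketch (the 1024 MB value is an illustrative assumption, not a recommendation):

    import org.apache.spark.{SparkConf, SparkContext}

    // Equivalent to: spark-submit --conf spark.yarn.executor.memoryOverhead=<MB>
    // The overhead is the off-heap headroom YARN allows beyond spark.executor.memory;
    // a container that exceeds memory + overhead is killed by YARN, which surfaces
    // as the "executor lost" error described above.
    val conf = new SparkConf()
      .set("spark.yarn.executor.memoryOverhead", "1024") // in MB; illustrative value
    val sc = new SparkContext(conf)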