spark-user mailing list archives

From "Aditya" <aditya.calangut...@augmentiq.co.in>
Subject Re: Spark Executor Lost issue
Date Wed, 28 Sep 2016 07:08:57 GMT
> Thanks Sushrut for the reply.
>
> Currently I have not defined the spark.default.parallelism property.
> Can you let me know what I should set it to?
>
>
> Regards,
> Aditya Calangutkar
>
> On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote:
>> Try increasing the parallelism by repartitioning; you can also raise
>> spark.default.parallelism.
>> You can also try decreasing the number of executor cores.
>> Basically, this happens when the executor uses considerably more memory
>> than it requested, and YARN kills the executor.
>>
>> Regards,
>>
>> Sushrut Ikhar
>> https://about.me/sushrutikhar
>>
>>
>>
>> On Wed, Sep 28, 2016 at 12:17 PM, Aditya
>> <aditya.calangutkar@augmentiq.co.in> wrote:
>>
>>     I have a Spark job which runs fine for small data, but when the data
>>     grows it fails with an executor lost error. My executor and driver
>>     memory are already set as high as they can go. I have also tried
>>     increasing --conf spark.yarn.executor.memoryOverhead=600, but that
>>     did not fix the problem. Is there any other way to fix it?
>>
>>
>
>
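
For reference, a minimal sketch of how the memory settings discussed in the question interact under YARN. The property names are the ones from the thread; the sizes (a 4g heap, 1024 MB overhead) are placeholder assumptions rather than values from this job, and in practice they are usually passed to spark-submit instead of being hard-coded.

    // Sketch only: illustrative sizes, not values from this thread.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("executor-lost-example")
      // Heap available to each executor JVM.
      .set("spark.executor.memory", "4g")
      // Off-heap overhead (MB) that YARN adds on top of the heap. Each YARN
      // container must fit spark.executor.memory plus this overhead; if an
      // executor grows past that total, YARN kills the container and the
      // driver reports the executor as lost.
      .set("spark.yarn.executor.memoryOverhead", "1024")

    val sc = new SparkContext(conf)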
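
And a sketch of the tuning suggested in the reply: raising spark.default.parallelism, repartitioning explicitly, and lowering the number of cores per executor. The partition count, core count, and input path below are placeholder assumptions.

    // Sketch only: placeholder numbers and path.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("repartition-example")
      // Default partition count used by RDD shuffle operations.
      .set("spark.default.parallelism", "200")
      // Fewer concurrent tasks per executor, so fewer tasks share one heap.
      .set("spark.executor.cores", "2")

    val sc = new SparkContext(conf)

    val input = sc.textFile("hdfs:///data/input")   // hypothetical path
    // Explicitly raise the partition count so each task handles less data
    // and therefore needs less memory.
    val repartitioned = input.repartition(200)

The idea is that more, smaller partitions shrink the per-task memory footprint, while fewer cores per executor reduce how many tasks compete for one executor's heap at the same time.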



