spark-user mailing list archives

From Akhil Das <ak...@sigmoidanalytics.com>
Subject Re: Master getting down with Memory issue.
Date Mon, 28 Sep 2015 11:47:47 GMT
Depends on the data volume that you are operating on.
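As a rough illustration of sizing partitions by data volume (the heuristic below is an assumption, not something stated in this thread): a common rule of thumb is ~128 MB of input per partition, and at least a few tasks per available core, whichever gives more partitions. `suggest_partitions` is a hypothetical helper, not a Spark API:

```python
def suggest_partitions(input_bytes, total_cores, target_mb=128):
    """Rough partition-count heuristic (hypothetical helper, not a Spark API):
    aim for ~target_mb of input per partition, but keep every core busy
    with a few tasks each."""
    by_size = -(-input_bytes // (target_mb * 1024 * 1024))  # ceiling division
    by_cores = total_cores * 3
    return max(by_size, by_cores)

# 10 GB of input on a 16-core cluster: the size term dominates -> 80 partitions
print(suggest_partitions(10 * 1024**3, 16))
```

For small inputs the core-count term dominates instead, so the cluster still gets enough tasks to stay busy.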

Thanks
Best Regards

On Mon, Sep 28, 2015 at 5:12 PM, Saurav Sinha <sauravsinha76@gmail.com>
wrote:

> Hi Akhil,
>
> My job creates 47 stages in one cycle, and it runs every hour.
> Can you please suggest what an optimum number of stages for a Spark job is?
>
> How can we reduce the number of stages in a Spark job?
>
> Thanks,
> Saurav Sinha
>
> On Mon, Sep 28, 2015 at 3:23 PM, Saurav Sinha <sauravsinha76@gmail.com>
> wrote:
>
>> Hi Akhil,
>>
>> Can you please explain to me how increasing the number of partitions
>> (which is a worker-node setting) will help?
>>
>> The issue is that my master is running out of memory (OOM).
>>
>> Thanks,
>> Saurav Sinha
>>
>> On Mon, Sep 28, 2015 at 2:32 PM, Akhil Das <akhil@sigmoidanalytics.com>
>> wrote:
>>
>>> This behavior totally depends on the job that you are running. Usually
>>> increasing the number of partitions will sort out this issue. It would be
>>> good if you could paste the code snippet or explain what type of
>>> operations you are doing.
>>>
>>> Thanks
>>> Best Regards
>>>
>>> On Mon, Sep 28, 2015 at 11:37 AM, Saurav Sinha <sauravsinha76@gmail.com>
>>> wrote:
>>>
>>>> Hi Spark Users,
>>>>
>>>> I am running some spark jobs which is running every hour.After running
>>>> for 12 hours master is getting killed giving exception as
>>>>
>>>> *java.lang.OutOfMemoryError: GC overhead limit exceeded*
>>>>
>>>> It look like there is some memory issue in spark master.
>>>> Spark Master is blocker. Any one please suggest me any thing.
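A common mitigation for a standalone Master hitting GC-overhead errors (the settings below are real Spark standalone-mode options, but the values are assumptions, not from this thread): raise the daemon heap and cap how much completed-application state the Master retains in memory:

```
# spark-env.sh -- the Master/Worker daemons default to a ~1 GB heap
export SPARK_DAEMON_MEMORY=4g

# spark-defaults.conf -- limit the finished-app/driver state the Master keeps
spark.deploy.retainedApplications  50
spark.deploy.retainedDrivers       50
```

With hourly jobs, retained state for old applications accumulates steadily, so lowering the retained counts directly bounds the Master's long-term memory growth.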
>>>>
>>>>
>>>> I noticed the same kind of issue with the Spark history server.
>>>>
>>>> In my job I have to monitor whether the job completed successfully; for
>>>> that I hit the status endpoint with curl. But once the number of jobs
>>>> increased beyond ~80 apps, the history server started responding with a
>>>> delay: it now takes more than 5 minutes to return the status of a job.
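For the history server slowdown, one cause is the cost of keeping many application UIs loaded in the daemon's heap. Two knobs worth checking (real settings of that era; the values here are assumptions):

```
# spark-defaults.conf -- number of application UIs kept in memory at once
spark.history.retainedApplications 25

# spark-env.sh -- give the History Server daemon a larger heap
export SPARK_DAEMON_MEMORY=4g
```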
>>>>
>>>> Running Spark 1.4.1 in standalone mode on a 5-machine cluster.
>>>>
>>>> Kindly suggest a solution for this memory issue; it is a blocker.
>>>>
>>>> Thanks,
>>>> Saurav Sinha
>>>>
>>>> --
>>>> Thanks and Regards,
>>>>
>>>> Saurav Sinha
>>>>
>>>> Contact: 9742879062
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Saurav Sinha
>>
>> Contact: 9742879062
>>
>
>
>
> --
> Thanks and Regards,
>
> Saurav Sinha
>
> Contact: 9742879062
>
