spark-user mailing list archives

From Jiang Jacky <jiang0...@gmail.com>
Subject Re: strange behavior of spark 2.1.0
Date Sun, 02 Apr 2017 12:27:35 GMT
Thank you for replying.
Actually, there is no message coming in during the exception, and there is no OOME in any executor. What I suspect is that it might be caused by AWL.

> On Apr 2, 2017, at 5:22 AM, Timur Shenkao <tsh@timshenkao.su> wrote:
> 
> Hello,
> It's difficult to tell without details.
> I believe one of the executors dies because of OOM or some runtime exception (an unforeseen dirty data row).
> Less probable is a GC stop-the-world pause when the incoming message rate increases drastically.
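> 
> If it is a dirty row, a cheap way to rule that out is to wrap the per-record parsing in scala.util.Try and log/drop bad records instead of letting the task die. A rough sketch (Event, parseEvent, and stream are placeholders, not your real code):
> 
> import scala.util.{Failure, Success, Try}
> 
> // Hypothetical payload type and parser; substitute your JMS handling.
> case class Event(id: String)
> def parseEvent(raw: String): Event = Event(raw.trim)
> 
> val events = stream.flatMap { raw =>
>   Try(parseEvent(raw)) match {
>     case Success(e) => Some(e)
>     case Failure(ex) =>
>       // Log and skip the malformed record rather than failing the task.
>       System.err.println(s"Skipping bad record: $raw (${ex.getMessage})")
>       None
>   }
> }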
> 
> 
>> On Saturday, April 1, 2017, Jiang Jacky <jiang01yi@gmail.com> wrote:
>> Hello, Guys
>> I am running Spark Streaming 2.1.0; the Scala version has been tried at both 2.11.7 and 2.11.4. It is consuming from JMS. Recently, I have been getting the following error:
>> "ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Stopped by driver"
>> 
>> This error occurs randomly; it might be after a couple of hours or a couple of days. Apart from this error, everything is perfect.
>> When the error happens, my job stops completely. No other error can be found.
>> I am running on top of YARN and have tried to look up the error in the YARN logs and container logs; no further information appears there. The job is just stopped gracefully from the driver. BTW, I have a customized receiver, but I do not think the problem originates in the receiver either: there is no error or exception from the receiver, and I can track the stop command being delivered through the "onStop" function of the receiver.
>> 
>> FYI, the driver is not consuming much memory, and there is no RDD "collect" call in the driver. I have also checked the container log for each executor and cannot find any further error.
>> 
>> The following is my conf for the Spark context:
>> val conf = new SparkConf().setAppName(jobName).setMaster(master)
>>   // Skip output-spec validation so existing output dirs can be overwritten.
>>   .set("spark.hadoop.validateOutputSpecs", "false")
>>   // Allow more than one SparkContext in the JVM (generally discouraged).
>>   .set("spark.driver.allowMultipleContexts", "true")
>>   // Cap each receiver at 500 records/second.
>>   .set("spark.streaming.receiver.maxRate", "500")
>>   // Let Spark adapt the ingestion rate to processing delays.
>>   .set("spark.streaming.backpressure.enabled", "true")
>>   // Stop the StreamingContext gracefully on JVM shutdown.
>>   .set("spark.streaming.stopGracefullyOnShutdown", "true")
>>   // Write event logs for the history server.
>>   .set("spark.eventLog.enabled", "true")
>> 
>> If you have any ideas or suggestions, please let me know. I would appreciate any help with a solution.
>> 
>> Thank you so much
>> 
