spark-user mailing list archives

From Jiang Jacky <jiang0...@gmail.com>
Subject Strange behavior of Spark 2.1.0
Date Sat, 01 Apr 2017 20:14:30 GMT
Hello, Guys
I am running Spark Streaming on 2.1.0; I have tried Scala versions 2.11.7
and 2.11.4. The job consumes from JMS. Recently, I have been getting the
following error:
*"ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0:
Stopped by driver"*

*This error occurs randomly, sometimes after a couple of hours, sometimes
after a couple of days; aside from this error, everything is perfect.*
When the error happens, my job stops completely, and no other error can be
found.
I am running on top of YARN, and I tried to look up the error in the YARN
logs and the container logs, but no further information appears there. The
job is simply stopped gracefully from the driver. BTW, I have a customized
receiver, but I do not think the problem comes from the receiver: there is
no error or exception from the receiver, and I can also trace the stop
command being delivered through the "onStop" function in the receiver.
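
For reference, the receiver follows the standard Receiver skeleton,
roughly like this (the JMS connection logic and the brokerUrl/queueName
parameters are simplified placeholders, not my real code):

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Simplified skeleton of the customized receiver; the real JMS consume
// logic is replaced with a placeholder.
class JmsReceiver(brokerUrl: String, queueName: String)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  override def onStart(): Unit = {
    // onStart must return immediately, so consume on a background thread.
    new Thread("JMS Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  override def onStop(): Unit = {
    // Invoked when the driver asks the receiver to stop; this is where I
    // can see the stop command arriving before the job dies.
  }

  private def receive(): Unit = {
    while (!isStopped()) {
      store("message") // placeholder for the real JMS consume call
    }
  }
}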

FYI, the driver is not consuming much memory, and there is no RDD
"collect" call in the driver. I have also checked the container log for
each executor and cannot find any further error.

The following is my conf for the Spark context:
import org.apache.spark.SparkConf

val conf = new SparkConf().setAppName(jobName).setMaster(master)
  .set("spark.hadoop.validateOutputSpecs", "false")
  .set("spark.driver.allowMultipleContexts", "true")
  .set("spark.streaming.receiver.maxRate", "500")          // cap records/sec per receiver
  .set("spark.streaming.backpressure.enabled", "true")     // adapt ingestion rate dynamically
  .set("spark.streaming.stopGracefullyOnShutdown", "true") // finish in-flight batches on shutdown
  .set("spark.eventLog.enabled", "true")

If you have any idea or suggestion, please let me know. I would appreciate
any help toward a solution.

Thank you so much
