spark-user mailing list archives

From Dmitry Goldenberg <dgoldenb...@hexastax.com>
Subject How to fix error "Failed to get records for..." after polling for 120000
Date Tue, 18 Apr 2017 22:22:09 GMT
Hi,

I was wondering if folks have any ideas or recommendations for how to fix
this error (full stack trace included below).

We're on Kafka 0.10.0.0 and spark-streaming_2.11 v. 2.0.0.

We've tried a few things as suggested in these sources:

   - http://stackoverflow.com/questions/42264669/spark-streaming-assertion-failed-failed-to-get-records-for-spark-executor-a-gro
   - https://issues.apache.org/jira/browse/SPARK-19275
   - https://issues.apache.org/jira/browse/SPARK-17147

but we're still seeing the error.
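In case it helps pinpoint what we're doing wrong, the kinds of settings we experimented with (following those threads) look roughly like this. This is only a sketch: the broker address, group id, and the exact timeout values are placeholders for our environment, not a recommendation.

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaParamsSketch {

    // Consumer params passed to KafkaUtils.createDirectStream; the three
    // timeout values below are the ones we tuned per the linked threads.
    public static Map<String, Object> kafkaParams() {
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092"); // placeholder
        kafkaParams.put("group.id", "Consumer-Group-1");        // placeholder
        // Standard Kafka 0.10 consumer configs: give the broker more time
        // to answer fetches before the client gives up. Kafka requires
        // request.timeout.ms > session.timeout.ms > heartbeat.interval.ms.
        kafkaParams.put("request.timeout.ms", 210000);
        kafkaParams.put("session.timeout.ms", 180000);
        kafkaParams.put("heartbeat.interval.ms", 60000);
        return kafkaParams;
    }

    // On the Spark side we also tried raising the executor-side poll
    // timeout (the "after polling for 120000" in the error), e.g.:
    //   sparkConf.set("spark.streaming.kafka.consumer.poll.ms", "300000");

    public static void main(String[] args) {
        System.out.println(kafkaParams().get("request.timeout.ms"));
    }
}
```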

We'd appreciate any clues or recommendations.
Thanks,
- Dmitry

------------------------------------------------------------------------------------------------------------

Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Task 3 in stage 4227.0 failed 1 times, most recent
failure: Lost task 3.0 in stage 4227.0 (TID 33819, localhost):
java.lang.AssertionError: assertion failed: Failed to get records for
spark-executor-Group-Consumer-Group-1 Topic1 0 476289 after polling for
120000
        at scala.Predef$.assert(Predef.scala:170)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:74)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:227)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:193)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
        at com.myco.ProcessPartitionFunction.call(ProcessPartitionFunction.java:70)
        at com.myco.ProcessPartitionFunction.call(ProcessPartitionFunction.java:24)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1916)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1916)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)



Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1890)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1903)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1916)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1930)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:902)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:900)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
        at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:900)
        at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:218)
        at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:45)
        at com.myco.KafkaSparkStreamingDriver$3.call(KafkaSparkStreamingDriver.java:215)
        at com.myco.KafkaSparkStreamingDriver$3.call(KafkaSparkStreamingDriver.java:202)
        at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$1.apply(JavaDStreamLike.scala:272)
        at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$1.apply(JavaDStreamLike.scala:272)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:247)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:247)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:246)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.AssertionError: assertion failed: Failed to get
records for spark-executor-Group-Consumer-Group1 Topic1 0 476289 after
polling for 120000
        at scala.Predef$.assert(Predef.scala:170)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:74)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:227)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:193)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
        at com.myco.ProcessPartitionFunction.call(ProcessPartitionFunction.java:70)
        at com.myco.ProcessPartitionFunction.call(ProcessPartitionFunction.java:24)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1916)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1916)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        ... 3 more
