spark-user mailing list archives

From Scott Clasen <scott.cla...@gmail.com>
Subject Re: Spark Streaming + Kafka + Mesos/Marathon strangeness
Date Thu, 27 Mar 2014 01:19:27 GMT
The web UI shows 3 executors: the driver, plus one Spark task on each worker.

I do see that there were 8 successful tasks and the ninth failed like so...

java.lang.Exception (java.lang.Exception: Could not compute split, block
input-0-1395860790200 not found)
org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:45)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
org.apache.spark.scheduler.Task.run(Task.scala:53)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:701)

Why would that happen? The two tasks still running are the ones that never
successfully received messages from Kafka, whereas the one that did receive
messages was killed for some reason after working fine for a few minutes.
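
That "Could not compute split, block input-N not found" error usually means the batch job ran after the executor holding the received block had died, so the block was gone before it could be processed. One mitigation worth trying in this era of Spark Streaming is to make block replication explicit when creating the Kafka stream, so a second executor keeps a copy. This is only a sketch under that assumption, not something from the original thread; the master URL, ZooKeeper quorum, group, and topic names below are all placeholders:

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Hypothetical setup: batch interval and master URL are placeholders.
val ssc = new StreamingContext("mesos://host:5050", "kafka-test", Seconds(10))

val stream = KafkaUtils.createStream(
  ssc,
  "zk-host:2181",                     // ZooKeeper quorum (placeholder)
  "my-consumer-group",                // Kafka consumer group (placeholder)
  Map("my-topic" -> 1),               // topic -> receiver thread count (placeholder)
  StorageLevel.MEMORY_AND_DISK_SER_2  // "_2" replicates each received block to 2 executors
)

With a replicated storage level, losing the receiver's executor should not by itself lose unprocessed input blocks, though a block can still disappear if both replicas' executors die before the batch runs.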

Thanks!





--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Kafka-Mesos-Marathon-strangeness-tp3285p3312.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
