spark-user mailing list archives

From Kanwaldeep <kanwal...@gmail.com>
Subject Re: Problem with HBase external table on freshly created EMR cluster
Date Sat, 15 Mar 2014 06:39:01 GMT
I'm getting the same error when writing data to an HBase cluster using Spark
Streaming.

Any suggestions on how to fix this?
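
For context, the write path looks roughly like the sketch below. This is only a
minimal illustration, not my actual job: the helper name writeToHBase, the table
"events", the column "cf:count", and the (rowKey, value) stream type are all
placeholders.

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.{HTable, Put}
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.streaming.dstream.DStream

    // Minimal sketch: push each (rowKey, value) pair of a DStream into HBase.
    // Table "events" and column "cf:count" are placeholders.
    def writeToHBase(stream: DStream[(String, String)]): Unit = {
      stream.foreachRDD { rdd =>
        rdd.foreachPartition { partition =>
          // Open the HBase connection on the worker, not the driver, so that
          // nothing non-serializable is captured in the task closure.
          val conf = HBaseConfiguration.create()
          val table = new HTable(conf, "events")
          partition.foreach { case (rowKey, value) =>
            val put = new Put(Bytes.toBytes(rowKey))
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(value))
            table.put(put)
          }
          table.close()
        }
      }
    }

The failure below happens when the streaming job runs; the stack trace is from the
driver log.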

2014-03-14 23:10:33,832 ERROR o.a.s.s.scheduler.JobScheduler - Error running job streaming job 1394863830000 ms.0
org.apache.spark.SparkException: Job aborted: Task 9.0:0 failed 4 times (most recent failure: Exception failure: java.lang.IllegalStateException: unread block data)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028) ~[spark-core_2.10-0.9.0-incubating.jar:0.9.0-incubating]
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026) ~[spark-core_2.10-0.9.0-incubating.jar:0.9.0-incubating]
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) ~[scala-library-2.10.2.jar:na]
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) ~[scala-library-2.10.2.jar:na]
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026) ~[spark-core_2.10-0.9.0-incubating.jar:0.9.0-incubating]
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619) ~[spark-core_2.10-0.9.0-incubating.jar:0.9.0-incubating]
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619) ~[spark-core_2.10-0.9.0-incubating.jar:0.9.0-incubating]
    at scala.Option.foreach(Option.scala:236) ~[scala-library-2.10.2.jar:na]
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619) ~[spark-core_2.10-0.9.0-incubating.jar:0.9.0-incubating]
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207) ~[spark-core_2.10-0.9.0-incubating.jar:0.9.0-incubating]
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498) [akka-actor_2.10-2.2.3.jar:2.2.3]
    at akka.actor.ActorCell.invoke(ActorCell.scala:456) [akka-actor_2.10-2.2.3.jar:2.2.3]
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237) [akka-actor_2.10-2.2.3.jar:2.2.3]
    at akka.dispatch.Mailbox.run(Mailbox.scala:219) [akka-actor_2.10-2.2.3.jar:2.2.3]
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) [akka-actor_2.10-2.2.3.jar:2.2.3]
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.10.2.jar:na]
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.10.2.jar:na]
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.10.2.jar:na]
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.10.2.jar:na]



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Problem-with-HBase-external-table-on-freshly-created-EMR-cluster-tp2307p2710.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
