> 15/08/11 12:59:34 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 3.0 (TID 71, sdldalplhdw02.suddenlink.cequel3.com): java.lang.NullPointerException
	at com.suddenlink.pnm.process.HBaseStoreHelper.flush(HBaseStoreHelper.java:313)

This is an error in your application code, not in Spark: the NPE is thrown from your own HBaseStoreHelper.flush (HBaseStoreHelper.java:313). Check what can be null at that line — most likely an HBase table or connection handle that was never initialized on the executor.
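A minimal sketch of the likely failure mode. The class and method names mirror the stack trace, but the body is an assumption (the real code is not shown in this thread): a helper whose HBase table handle is only initialized on the driver stays null in the executor JVM, and flush() then dereferences null. Failing fast with a descriptive message makes this much easier to diagnose than a bare NPE:

```java
// Hypothetical reconstruction, NOT the actual Suddenlink code.
public class HBaseStoreHelper {
    // Stand-in for the real HBase table handle
    // (org.apache.hadoop.hbase.client.HTable in that Spark/HBase era).
    private Object table;

    // In a Spark job this must run inside the closure, once per executor
    // JVM; if it only runs on the driver, the field stays null on workers.
    public void init(Object tableHandle) {
        this.table = tableHandle;
    }

    public void flush() {
        // Guard that turns a bare NullPointerException into an
        // actionable error message.
        if (table == null) {
            throw new IllegalStateException(
                "HBase table handle is null; was init() called on this executor?");
        }
        // ... real flush logic (e.g. batching puts to HBase) would go here ...
    }

    public static void main(String[] args) {
        HBaseStoreHelper helper = new HBaseStoreHelper();
        try {
            helper.flush();   // not initialized: descriptive failure
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
        helper.init(new Object());
        helper.flush();       // succeeds once initialized
        System.out.println("flush ok after init");
    }
}
```

If that guess is right, the fix is to create the HBase connection inside the foreachRDD/foreachPartition closure (so it happens on each executor), rather than on the driver, since connection objects are not serializable across JVMs.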



On Wed, Aug 12, 2015 at 5:12 AM, Nikhil Gs <gsnikhil1432010@gmail.com> wrote:
Hello Team,

I am facing the error pasted below. My job fails when I copy my data files into the Flume spool directory. The job fails most of the time, and I don't know why.

I have run into this issue several times. For your reference, I have attached the complete YARN log file. Please suggest what the issue might be.

Thanks in advance.

15/08/11 12:59:30 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on sdldalplhdw02.suddenlink.cequel3.com:35668 (size: 2.1 KB, free: 1059.7 MB)
15/08/11 12:59:31 INFO storage.BlockManagerInfo: Added rdd_5_0 in memory on sdldalplhdw02.suddenlink.cequel3.com:35668 (size: 1693.6 KB, free: 1058.0 MB)
15/08/11 12:59:32 INFO storage.BlockManagerInfo: Added rdd_7_0 in memory on sdldalplhdw02.suddenlink.cequel3.com:35668 (size: 1697.6 KB, free: 1056.4 MB)
15/08/11 12:59:34 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 3.0 (TID 71, sdldalplhdw02.suddenlink.cequel3.com): java.lang.NullPointerException
	at com.suddenlink.pnm.process.HBaseStoreHelper.flush(HBaseStoreHelper.java:313)
	at com.suddenlink.pnm.process.StoreNodeInHBase$1.call(StoreNodeInHBase.java:57)
	at com.suddenlink.pnm.process.StoreNodeInHBase$1.call(StoreNodeInHBase.java:31)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:304)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:304)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:798)
	at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:798)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1503)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1503)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:64)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

15/08/11 12:59:34 INFO scheduler.TaskSetManager: Starting task 0.1 in stage 3.0 (TID 72, sdldalplhdw02.suddenlink.cequel3.com, NODE_LOCAL, 1179 bytes)
15/08/11 12:59:34 INFO scheduler.TaskSetManager: Lost task 0.1 in stage 3.0 (TID 72) on executor sdldalplhdw02.suddenlink.cequel3.com: java.lang.NullPointerException (null) [duplicate 1]
15/08/11 12:59:34 INFO scheduler.TaskSetManager: Starting task 0.2 in stage 3.0 (TID 73, sdldalplhdw02.suddenlink.cequel3.com, NODE_LOCAL, 1179 bytes)
15/08/11 12:59:34 INFO scheduler.TaskSetManager: Lost task 0.2 in stage 3.0 (TID 73) on executor sdldalplhdw02.suddenlink.cequel3.com: java.lang.NullPointerException (null) [duplicate 2]
15/08/11 12:59:34 INFO scheduler.TaskSetManager: Starting task 0.3 in stage 3.0 (TID 74, sdldalplhdw02.suddenlink.cequel3.com, NODE_LOCAL, 1179 bytes)
15/08/11 12:59:34 INFO scheduler.TaskSetManager: Lost task 0.3 in stage 3.0 (TID 74) on executor sdldalplhdw02.suddenlink.cequel3.com: java.lang.NullPointerException (null) [duplicate 3]
15/08/11 12:59:34 ERROR scheduler.TaskSetManager: Task 0 in stage 3.0 failed 4 times; aborting job
15/08/11 12:59:34 INFO cluster.YarnClusterScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool 
15/08/11 12:59:34 INFO cluster.YarnClusterScheduler: Cancelling stage 3
15/08/11 12:59:34 INFO scheduler.DAGScheduler: Job 2 failed: foreachRDD at NodeProcessor.java:101, took 4.750491 s
15/08/11 12:59:34 ERROR scheduler.JobScheduler: Error running job streaming job 1439315970000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 74, sdldalplhdw02.suddenlink.cequel3.com): java.lang.NullPointerException
Regards,
Nik.



---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org



--
Best Regards

Jeff Zhang