spark-user mailing list archives

From: Akhil Das <ak...@sigmoidanalytics.com>
Subject: Re: Null pointer exception with larger datasets
Date: Tue, 18 Nov 2014 07:48:36 GMT
Make sure your list is not null; if it is null, then it's effectively the same as doing:

JavaRDD<Student> distData = sc.parallelize(null);

distData.foreach(println)
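
For example, here is a minimal null-safe sketch (assuming your existing SparkContext `sc`, your `Student` class, and the same HDFS path from your mail; the variable names are only illustrative):

import java.util.ArrayList;
import java.util.List;
import org.apache.spark.api.java.JavaRDD;

// Guard against a null list and drop null elements before saving,
// since saveAsTextFile calls toString() on every record.
List<Student> safeList = new ArrayList<Student>();
if (list != null) {
    for (Student s : list) {
        if (s != null) {
            safeList.add(s);
        }
    }
}

JavaRDD<Student> distData = sc.parallelize(safeList);
distData.saveAsTextFile("hdfs://master/data/spark/instruments.txt");

If the exception still appears after this, the usual culprit is a null field inside Student that its toString() dereferences.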



Thanks
Best Regards

On Tue, Nov 18, 2014 at 12:07 PM, Naveen Kumar Pokala <npokala@spcapitaliq.com> wrote:

> Hi,
>
>
>
> I have a list of Students whose size is one lakh (100,000), and I am trying to
> save it to a file. It is throwing a null pointer exception.
>
>
>
> JavaRDD<Student> distData = sc.parallelize(list);
>
>
>
> distData.saveAsTextFile("hdfs://master/data/spark/instruments.txt");
>
>
>
>
>
> 14/11/18 01:33:21 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 0.0 (TID 5, master): java.lang.NullPointerException:
>
>         org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1158)
>         org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1158)
>         scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>         org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:984)
>         org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:974)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         java.lang.Thread.run(Thread.java:745)
>
>
>
>
>
> How do I handle this?
>
>
>
> -Naveen
>
