spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-980) NullPointerException for single-host setup with S3 URLs
Date Fri, 23 Jan 2015 12:07:34 GMT

     [ https://issues.apache.org/jira/browse/SPARK-980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-980.
-----------------------------
    Resolution: Fixed

> NullPointerException for single-host setup with S3 URLs
> -------------------------------------------------------
>
>                 Key: SPARK-980
>                 URL: https://issues.apache.org/jira/browse/SPARK-980
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 0.8.0
>            Reporter: Paul R. Brown
>
> Short version:
> * The use of {{execSparkHome_}} in [Worker.scala|https://github.com/apache/incubator-spark/blob/v0.8.0-incubating/core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala#L135] should be checked for {{null}}, or that value should be defaulted or plumbed through.
> * If the {{sparkHome}} argument to {{new SparkContext(...)}} is non-optional, then it should not be marked as optional.
> Long version:
> Starting up with {{bin/start-all.sh}} and then connecting from a Scala program and attempting to read two S3 URLs results in the following trace in the worker log:
> {code}
> 13/12/03 21:50:23 ERROR worker.Worker:
> java.lang.NullPointerException
> 	at java.io.File.<init>(File.java:277)
> 	at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.apply(Worker.scala:135)
> 	at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.apply(Worker.scala:120)
> 	at akka.actor.Actor$class.apply(Actor.scala:318)
> 	at org.apache.spark.deploy.worker.Worker.apply(Worker.scala:39)
> 	at akka.actor.ActorCell.invoke(ActorCell.scala:626)
> 	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:197)
> 	at akka.dispatch.Mailbox.run(Mailbox.scala:179)
> 	at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:516)
> 	at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
> 	at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
> 	at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
> 	at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
> {code}
> This is on Mac OS X 10.9, Oracle Java 7u45, and the Hadoop 1 download from the incubator.
> Reading the code, this occurs because {{execSparkHome_}} is {{null}}; see [Worker.scala#L135|https://github.com/apache/incubator-spark/blob/v0.8.0-incubating/core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala#L135]. Setting a value explicitly in the Scala driver allows the computation to complete.
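The trace bottoms out in {{java.io.File.<init>}}, which throws {{NullPointerException}} when handed a {{null}} pathname. A minimal sketch of the guard the report asks for (hypothetical names, not the actual Spark patch): resolve the per-executor home against a worker-level default before constructing the {{File}}, and fail with a clear message if both are unset.

```java
import java.io.File;

public class SparkHomeGuard {
    // Hypothetical helper illustrating the suggested fix: prefer the
    // executor-specific home, fall back to the worker's default, and
    // never pass null into the File constructor (which is exactly the
    // NPE at Worker.scala:135 in the trace above).
    static File resolveSparkHome(String execSparkHome, String workerDefault) {
        String home = (execSparkHome != null) ? execSparkHome : workerDefault;
        if (home == null) {
            throw new IllegalArgumentException(
                "spark home not set: pass it to SparkContext or set it on the worker");
        }
        return new File(home);
    }

    public static void main(String[] args) {
        // null executor value falls back to the worker default
        System.out.println(resolveSparkHome(null, "/opt/spark").getPath());
    }
}
```

With a guard like this, a missing setting surfaces as a descriptive {{IllegalArgumentException}} in the worker log rather than a bare {{NullPointerException}} deep inside {{java.io.File}}.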



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

