spark-user mailing list archives

From Dong Lei <>
Subject Driver crash at the end with InvocationTargetException when running SparkPi
Date Mon, 08 Jun 2015 03:31:06 GMT
Hi spark users:

After I submitted a SparkPi job to Spark, the driver crashed at the end of the job with the
following log:

WARN EventLoggingListener: Event log dir file:/d:/data/SparkWorker/work/driver-20150607200517-0002/logs/event
does not exists, will newly create one.
Exception in thread "main" java.lang.reflect.InvocationTargetException
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(
                at java.lang.reflect.Method.invoke(
                at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:59)
                at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: java.lang.NullPointerException
                at java.lang.ProcessBuilder.start(
                at org.apache.hadoop.util.Shell.runCommand(
                at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
                at org.apache.hadoop.util.Shell.execCommand(
                at org.apache.hadoop.util.Shell.execCommand(
                at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(
                at org.apache.hadoop.fs.FilterFileSystem.setPermission(
                at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:135)
                at org.apache.spark.SparkContext.<init>(SparkContext.scala:401)
                at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
                at org.apache.spark.examples.SparkPi.main(SparkPi.scala)

From the log, I can see that the driver added jars from HDFS, connected to the master, and
scheduled the executors, and all the executors were running. Then this error occurred.

The command I used to submit the job (I'm running Spark 1.3.1 in standalone mode on Windows):
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://localhost:7077 \
  --deploy-mode cluster \
  hdfs://localhost:443/spark-examples-1.3.1-hadoop2.4.0.jar
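To help isolate the problem, one thing I plan to try is resubmitting with event logging disabled, so that EventLoggingListener.start is never reached (spark.eventLog.enabled is the standard Spark configuration key; the master URL and jar path below are just copied from my command above):

```shell
# Same submission, but with event logging turned off as a diagnostic;
# if SparkPi then completes, the failure is isolated to the event-log path.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://localhost:7077 \
  --deploy-mode cluster \
  --conf spark.eventLog.enabled=false \
  hdfs://localhost:443/spark-examples-1.3.1-hadoop2.4.0.jar
```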

Any ideas about the error?
I've found a similar error in JIRA, but it only occurred in FileLogger when using YARN with the
event log set to HDFS. In my case, I use standalone mode with the event log set to a local
directory, and my error is raised from org.apache.hadoop.util.Shell.runCommand.
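For what it's worth, the NullPointerException at ProcessBuilder.start is documented behavior when an element of the command list is null, which would fit a command path that Hadoop's Shell failed to resolve on Windows (my guess, not confirmed). A minimal sketch of that failure mode, independent of Spark and Hadoop:

```java
import java.io.IOException;
import java.util.Arrays;

public class ProcessBuilderNpe {
    // Returns true if ProcessBuilder.start() throws NullPointerException when
    // the command list contains a null element -- the same exception seen at
    // ProcessBuilder.start in the stack trace above. The "chmod" arguments
    // here are only illustrative; no process is ever launched.
    public static boolean startThrowsNpeOnNullArg() {
        ProcessBuilder pb = new ProcessBuilder(Arrays.asList("chmod", null, "somefile"));
        try {
            pb.start();
            return false;
        } catch (NullPointerException e) {
            return true;   // thrown before any process is created
        } catch (IOException e) {
            return false;  // would mean the command was actually attempted
        }
    }

    public static void main(String[] args) {
        System.out.println("NPE from start(): " + startThrowsNpeOnNullArg());
    }
}
```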

Best Regards
Dong Lei
