spark-user mailing list archives

From Tom Graves <>
Subject Re: Fail to run on yarn with release version?
Date Fri, 16 Aug 2013 13:20:26 GMT
It looks like a config issue. Do you have HADOOP_CONF_DIR and HADOOP_PREFIX set and pointing
to the proper install/config locations for your cluster?
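The check Tom suggests can be sketched like this (the install path below is a hypothetical example, not taken from the thread; adjust it to the actual Hadoop location on the cluster):

```shell
# Point Spark at the cluster's Hadoop install and its config directory
# before invoking ./run. Paths here are illustrative.
export HADOOP_PREFIX=/opt/hadoop-2.0.5-alpha
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop

# Sanity-check that both are set; HADOOP_CONF_DIR should contain
# core-site.xml and yarn-site.xml for the cluster.
echo "HADOOP_PREFIX=$HADOOP_PREFIX"
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
```

If HADOOP_CONF_DIR is unset or points at an empty directory, the client falls back to Hadoop's built-in defaults and cannot resolve cluster-specific settings.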


 From: "Liu, Raymond" <>
To: "" <> 
Sent: Friday, August 16, 2013 2:46 AM
Subject: Fail to run on yarn with release version?


    I could run spark trunk code on top of yarn 2.0.5-alpha by 

SPARK_JAR=./core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client
  --jar examples/target/scala-2.9.3/spark-examples_2.9.3-0.8.0-SNAPSHOT.jar \
  --class spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 3 \
  --worker-memory 2g \
  --worker-cores 2

However, if I build a release package and use that package on the
cluster, it fails to start. I did copy the examples jar to the jars/ dir.
The other modes (standalone/mesos/local) run fine with the release package.

The error encountered is:

Exception in thread "main" No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
        at org.apache.hadoop.fs.FileSystem.createFileSystem(
        at org.apache.hadoop.fs.FileSystem.access$200(
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
        at org.apache.hadoop.fs.FileSystem$Cache.get(
        at org.apache.hadoop.fs.FileSystem.get(
        at org.apache.hadoop.fs.FileSystem.get(
        at spark.deploy.yarn.Client.prepareLocalResources(Client.scala:117)
        at spark.deploy.yarn.Client$.main(Client.scala:318)
        at spark.deploy.yarn.Client.main(Client.scala)

Google results suggest this happens when hdfs/core-default.xml is not included in the fat jar,
but I checked and it is included.
Any idea on this issue? Thanks!
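A quick way to inspect what actually ended up in the assembly is sketched below (the jar name is taken from the trunk build command earlier in this thread; the release package's jar name may differ). Note that on Hadoop 2.x, FileSystem.getFileSystemClass resolves the "hdfs" scheme via the fs.hdfs.impl config key and the ServiceLoader file META-INF/services/org.apache.hadoop.fs.FileSystem, and assembly-jar merging can clobber either one even when core-default.xml itself survives:

```shell
# Sketch: list the filesystem-related entries in the assembly jar.
# Jar name is from the trunk build above; adjust for the release package.
JAR=spark-core-assembly-0.8.0-SNAPSHOT.jar

if [ -f "$JAR" ]; then
  # Look for the default configs and the FileSystem service registration;
  # if the services file was overwritten during the merge, Hadoop cannot
  # map "hdfs" to DistributedFileSystem and raises
  # "No FileSystem for scheme: hdfs".
  unzip -l "$JAR" | grep -E 'core-default\.xml|hdfs-default\.xml|META-INF/services/org\.apache\.hadoop\.fs\.FileSystem'
else
  echo "missing $JAR"
fi
```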

Best Regards,
Raymond Liu