spark-user mailing list archives

From Tom Graves <tgraves...@yahoo.com>
Subject Re: Fail to run on yarn with release version?
Date Fri, 16 Aug 2013 13:20:26 GMT
It looks like a config issue. Do you have HADOOP_CONF_DIR and HADOOP_PREFIX set and pointing
to the proper install/config locations for your cluster?
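
For example, something like this (a sketch; the paths below are placeholders for
wherever Hadoop actually lives on your nodes):

    # Point Spark at the cluster's Hadoop config and install.
    # Example paths only; substitute your real locations.
    export HADOOP_CONF_DIR=/etc/hadoop/conf   # dir with core-site.xml, hdfs-site.xml, yarn-site.xml
    export HADOOP_PREFIX=/usr/lib/hadoop      # root of the Hadoop install

    # Then launch the YARN client as before:
    SPARK_JAR=./spark-core-assembly-0.8.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client ...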

Tom


________________________________
 From: "Liu, Raymond" <raymond.liu@intel.com>
To: "user@spark.incubator.apache.org" <user@spark.incubator.apache.org> 
Sent: Friday, August 16, 2013 2:46 AM
Subject: Fail to run on yarn with release version?
 

Hi

    I can run the Spark trunk code on top of YARN 2.0.5-alpha with:

SPARK_JAR=./core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
  --jar examples/target/scala-2.9.3/spark-examples_2.9.3-0.8.0-SNAPSHOT.jar \
  --class spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 3 \
  --worker-memory 2g \
  --worker-cores 2


However, if I build a release package with make-distribution.sh and use that package on the
cluster, it fails to start. I did copy the examples jar to the jars/ dir.
The other modes (standalone/mesos/local) run fine with the release package.

The error encountered is:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2265)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2272)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2311)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2293)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:317)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
        at spark.deploy.yarn.Client.prepareLocalResources(Client.scala:117)
        at spark.deploy.yarn.Client.run(Client.scala:59)
        at spark.deploy.yarn.Client$.main(Client.scala:318)
        at spark.deploy.yarn.Client.main(Client.scala)


Google results suggest this happens when core-default.xml is not included in the fat jar,
but I checked and it is there.
Any idea on this issue? Thanks!
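
For reference, this is roughly how the jar contents can be inspected (a sketch; the jar
path matches my build layout, adjust as needed). Note that on Hadoop 2.x, FileSystem
implementations are also discovered through a META-INF service file, so that is worth
checking in addition to core-default.xml:

    # core-default.xml itself:
    unzip -l core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar | grep core-default.xml

    # Hadoop 2.x resolves schemes via ServiceLoader, so the merged jar must keep
    # the hdfs entry in this service file:
    unzip -p core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar \
        META-INF/services/org.apache.hadoop.fs.FileSystem | grep DistributedFileSystem

    # If that entry is missing (a common casualty of jar merging), mapping the scheme
    # explicitly in the Hadoop config is a known workaround, e.g. in core-site.xml:
    #   <property>
    #     <name>fs.hdfs.impl</name>
    #     <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
    #   </property>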


Best Regards,
Raymond Liu