spark-user mailing list archives

From Manoj Samel <>
Subject Error when running SparkPi on Secure HA Hadoop cluster
Date Thu, 15 Jan 2015 19:23:10 GMT

The setup is as follows:

Hadoop Cluster 2.3.0 (CDH5.0)
- Namenode HA
- Resource manager HA
- Secured with Kerberos

Spark 1.2

SparkPi is run as follows:
- conf/spark-defaults.conf has the following entries:
spark.yarn.queue myqueue
spark.yarn.access.namenodes hdfs://namespace (remember this is namenode HA)
- Do kinit with some user keytab
- Submit SparkPi as follows:
spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client
--num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores
1 --queue thequeue $MY_SPARK_DIR/lib/spark-examples*.jar 10

This gives the following trace (not sure why it reports an unknown queue when
a queue name is specified in spark-defaults.conf above).

15/01/15 19:18:27 INFO impl.YarnClientImpl: Submitted application
15/01/15 19:18:28 INFO yarn.Client: Application report for
application_1415648563285_31469 (state: FAILED)
15/01/15 19:18:28 INFO yarn.Client:
 client token: N/A
 diagnostics: Application application_1415648563285_31469 submitted by user
XYZ to unknown queue: thequeue <<--- WHY UNKNOWN QUEUE ???
 ApplicationMaster host: N/A
 ApplicationMaster RPC port: -1
 queue: thequeue   <<--- WHY UNKNOWN QUEUE ???
 start time: 1421349507652
 final status: FAILED
 tracking URL: N/A
 user: XYZ
Exception in thread "main" org.apache.spark.SparkException: Yarn
application has already ended! It might have been killed or unable to
launch application master.
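One way to check whether the rejected queue name is actually known to the ResourceManager is to list the configured queues with the stock Hadoop CLI (a diagnostic sketch; the queue names below are the ones from the submission above, and on a secured cluster the command needs a valid Kerberos ticket):

```shell
# List the queues the ResourceManager/scheduler knows about.
# Requires a prior kinit on a Kerberos-secured cluster.
mapred queue -list

# Note: the spark-submit invocation above passes --queue thequeue on the
# command line, which takes precedence over spark.yarn.queue (myqueue) in
# spark-defaults.conf -- so "thequeue" is the name YARN must recognize.
```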
