That seems like an HDP-specific issue. I did a quick search on "spark bad substitution" and all the results have to do with people failing to run YARN cluster in HDP. Here is a workaround that seems to have worked for multiple people.

I would not block the release on this particular issue. First, this doesn't seem like a Spark issue and second, even if it is, this only affects a small number of users and there is a workaround for it. In my own testing the `extraJavaOptions` are propagated correctly in both YARN client and cluster modes.
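For reference, the commonly cited workaround is to pin `hdp.version` to a concrete value instead of relying on the `${hdp.version}` substitution. A sketch of what people set (the version string below is a placeholder; check what `hdp-select status` reports on your cluster):

```
# spark-defaults.conf (or the file passed via --properties-file)
# 2.3.2.0-2950 is a placeholder HDP version string, not a recommendation
spark.driver.extraJavaOptions    -Dhdp.version=2.3.2.0-2950
spark.yarn.am.extraJavaOptions   -Dhdp.version=2.3.2.0-2950
spark.executor.extraJavaOptions  -Dhdp.version=2.3.2.0-2950
```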

2015-12-17 12:36 GMT-08:00 Sebastian YEPES FERNANDEZ <>:
Thanks for the reply. Did you run this on a Hortonworks or a Cloudera cluster?
I suspect the issue is coming from the extraJavaOptions, as these are necessary on HDP. The strange thing is that 1.5 works with exactly the same settings.

# jar -tf spark-assembly-1.6.0-SNAPSHOT-hadoop2.7.1.jar | grep ApplicationMaster.class

Exit code: 1
Exception message: /hadoop/hdfs/disk02/hadoop/yarn/local/usercache/syepes/appcache/application_1445706872927_1593/container_e44_1445706872927_1593_02_000001/ line 24: /usr/hdp/current/hadoop-client/lib/hadoop-lzo-$PWD:$PWD/__spark_conf__:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
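(A side note on the error text itself: "bad substitution" comes from the shell, not from Spark. A POSIX shell variable name cannot contain a dot, so any unexpanded `${hdp.version}` left in the launch script is rejected at expansion time rather than substituted with an empty string. A minimal reproduction, assuming a POSIX-compatible /bin/sh:)

```shell
# A dot is not legal in a POSIX shell variable name, so the shell
# refuses to expand ${hdp.version} and aborts with a non-zero status.
/bin/sh -c 'echo "/usr/hdp/${hdp.version}/hadoop/lib"'
# -> "bad substitution" error on stderr, non-zero exit status
```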


On Thu, Dec 17, 2015 at 9:14 PM, Andrew Or <> wrote:

I just ran Spark 1.6 (881f254) on YARN with Hadoop 2.4.0 and was able to run a simple application in cluster mode successfully.

Can you verify whether the org.apache.spark.deploy.yarn.ApplicationMaster class exists in your assembly jar?

jar -tf assembly.jar | grep ApplicationMaster


2015-12-17 7:44 GMT-08:00 syepes <>:
-1 (YARN Cluster deployment mode not working)

I have just tested 1.6 (d509194b) on our HDP 2.3 platform, and cluster
mode does not seem to work. It looks like some parameters are not being passed
through correctly. This same example works correctly with 1.5.

# spark-submit --master yarn --deploy-mode cluster --num-executors 1
--properties-file $PWD/spark-props.conf --class

Error: Could not find or load main class

extraJavaOptions                -Dhdp.version=
spark.executor.extraJavaOptions              -Dhdp.version=

I will try to do some more debugging on this issue.

Sent from the Apache Spark Developers List mailing list archive.