That seems like an HDP-specific issue. I did a quick search on "spark bad substitution" and all the results have to do with people failing to run YARN cluster mode on HDP. A workaround that seems to have worked for multiple people is sketched below.
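(A hedged sketch of the commonly cited workaround, not verified here; the version string 2.3.2.0-2950 is copied from Sebastian's config later in this thread. Cluster mode reads spark.driver.extraJavaOptions, while client mode needs spark.yarn.am.extraJavaOptions instead.)

# spark-defaults.conf (or any file passed via --properties-file)
spark.driver.extraJavaOptions      -Dhdp.version=2.3.2.0-2950
spark.yarn.am.extraJavaOptions     -Dhdp.version=2.3.2.0-2950
spark.executor.extraJavaOptions    -Dhdp.version=2.3.2.0-2950

Some reports instead put the same flag in $SPARK_HOME/conf/java-opts, which the launcher reportedly also picks up:
-Dhdp.version=2.3.2.0-2950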

I would not block the release on this particular issue. First, this doesn't seem to be a Spark issue, and second, even if it is, it only affects a small number of users and there is a workaround for it. In my own testing the `extraJavaOptions` are propagated correctly in both YARN client and cluster modes.
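(For anyone who wants to repeat that propagation check, a hedged sketch; the flag choice is mine, not from this thread. Add a JVM option with visible output, e.g. -verbose:gc, then look for GC lines in the executor stdout:

spark-submit --master yarn --deploy-mode cluster \
  --conf "spark.executor.extraJavaOptions=-verbose:gc" \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/lib/spark-examples-1.6.0-SNAPSHOT-hadoop2.7.1.jar 10
yarn logs -applicationId <application_id> | grep -i gc
)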

2015-12-17 12:36 GMT-08:00 Sebastian YEPES FERNANDEZ <syepes@gmail.com>:
@Andrew
Thanks for the reply. Did you run this on a Hortonworks or a Cloudera cluster?
I suspect the issue is coming from the extraJavaOptions, as these are necessary on HDP; the strange thing is that 1.5 works with exactly the same settings.

# jar -tf spark-assembly-1.6.0-SNAPSHOT-hadoop2.7.1.jar | grep ApplicationMaster.class                                          
org/apache/spark/deploy/yarn/ApplicationMaster.class

-----
Exit code: 1
Exception message: /hadoop/hdfs/disk02/hadoop/yarn/local/usercache/syepes/appcache/application_1445706872927_1593/container_e44_1445706872927_1593_02_000001/launch_container.sh: line 24: /usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.2.3.2.0-2950.jar:$PWD:$PWD/__spark_conf__:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
-----
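(Side note on the error text: '.' is not a valid character in a bash parameter name, so bash can never expand ${hdp.version} when the literal string reaches the generated launch_container.sh; that is exactly the "bad substitution" above. A minimal reproduction, with the message prefix varying by bash version:

$ bash -c 'echo ${hdp.version}'
bash: hdp.version: bad substitution
)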

Regards,
 Sebastian

On Thu, Dec 17, 2015 at 9:14 PM, Andrew Or <andrew@databricks.com> wrote:
@syepes

I just ran Spark 1.6 (881f254) on YARN with Hadoop 2.4.0. I was able to run a simple application in cluster mode successfully.

Can you verify whether the org.apache.spark.deploy.yarn.ApplicationMaster class exists in your assembly jar?

jar -tf assembly.jar | grep ApplicationMaster

-Andrew


2015-12-17 7:44 GMT-08:00 syepes <syepes@gmail.com>:
-1 (YARN Cluster deployment mode not working)

I have just tested 1.6 (d509194b) on our HDP 2.3 platform and cluster mode
does not seem to work. It looks like some parameters are not being passed
correctly.
This example works correctly with 1.5.

# spark-submit --master yarn --deploy-mode cluster --num-executors 1 \
    --properties-file $PWD/spark-props.conf \
    --class org.apache.spark.examples.SparkPi \
    /opt/spark/lib/spark-examples-1.6.0-SNAPSHOT-hadoop2.7.1.jar

Error: Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster

spark-props.conf
-----------------------------
spark.driver.extraJavaOptions                -Dhdp.version=2.3.2.0-2950
spark.driver.extraLibraryPath                /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
spark.executor.extraJavaOptions              -Dhdp.version=2.3.2.0-2950
spark.executor.extraLibraryPath              /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
-----------------------------
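(The same settings can also be passed inline with --conf instead of --properties-file; an equivalent sketch using the values above:

# spark-submit --master yarn --deploy-mode cluster --num-executors 1 \
    --conf "spark.driver.extraJavaOptions=-Dhdp.version=2.3.2.0-2950" \
    --conf "spark.executor.extraJavaOptions=-Dhdp.version=2.3.2.0-2950" \
    --conf "spark.driver.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64" \
    --conf "spark.executor.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64" \
    --class org.apache.spark.examples.SparkPi \
    /opt/spark/lib/spark-examples-1.6.0-SNAPSHOT-hadoop2.7.1.jar
)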

I will try to do some more debugging on this issue.
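(A hedged starting point for that debugging: grep the NodeManager's generated launch script for the CLASSPATH export, since that is the line that fails with "bad substitution" in the log quoted earlier in the thread; the local-dir layout below is copied from that same log. Note that container dirs are cleaned up quickly after a failure unless yarn.nodemanager.delete.debug-delay-sec is raised:

# grep 'export CLASSPATH' /hadoop/hdfs/disk02/hadoop/yarn/local/usercache/syepes/appcache/application_*/container_*/launch_container.sh
)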

--
View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-6-0-RC3-tp15660p15692.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org
For additional commands, e-mail: dev-help@spark.apache.org