spark-user mailing list archives

From jamborta <jambo...@gmail.com>
Subject spark context not picking up default hadoop filesystem
Date Mon, 26 Jan 2015 14:07:24 GMT
hi all,

I am trying to create a SparkContext programmatically, using
org.apache.spark.deploy.SparkSubmit. It all looks OK, except that the Hadoop
configuration created during the process is not picking up core-site.xml,
so it defaults back to the local filesystem. I have set HADOOP_CONF_DIR in
spark-env.sh and put core-site.xml in the conf folder. The whole thing
works if it is executed through the Spark shell.
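
Roughly what I am doing, as a minimal sketch (the app name and master here
are placeholders, not my actual values):

import org.apache.spark.{SparkConf, SparkContext}

// Build the context programmatically instead of going through spark-shell.
val conf = new SparkConf()
  .setAppName("programmatic-context") // placeholder app name
  .setMaster("local[*]")              // placeholder master

val sc = new SparkContext(conf)

// This prints file:/// rather than the hdfs://... value from core-site.xml,
// even though HADOOP_CONF_DIR is set in spark-env.sh.
println(sc.hadoopConfiguration.get("fs.defaultFS"))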

Just wondering: where does Spark pick up the Hadoop config path from?
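
I assume I could force it by adding the file to the configuration by hand,
along these lines (the path is just an example of wherever HADOOP_CONF_DIR
points), but I'd rather it got picked up automatically:

import org.apache.hadoop.fs.Path

// Workaround sketch: load core-site.xml into the context's Hadoop
// configuration explicitly. Adjust the path to your HADOOP_CONF_DIR.
sc.hadoopConfiguration.addResource(new Path("/etc/hadoop/conf/core-site.xml"))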

many thanks,





