spark-user mailing list archives

From Jacek Laskowski <>
Subject Re: Is spark-env.sh sourced by Application Master and Executor for Spark on YARN?
Date Wed, 03 Jan 2018 13:46:30 GMT

My understanding is that the AM with the driver (in cluster deploy mode) and
the executors are plain Java processes whose settings are applied one by one
while a Spark application is submitted for execution and the
ContainerLaunchContext for launching the YARN containers is created. That is
the code path where the settings-to-properties mapping happens.

Given that, I think conf/spark-defaults.conf won't be loaded by itself.

Why don't you set a property and see if it's available on the driver in
cluster deploy mode? That should give you a definitive answer (or at least
get you closer).
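A minimal version of that experiment, with a hypothetical variable name MY_TEST_VAR (spark.yarn.appMasterEnv.[EnvironmentVariableName] is the documented property for setting AM environment variables on YARN):

```
# conf/spark-defaults.conf — set an env var for the YARN Application Master
spark.yarn.appMasterEnv.MY_TEST_VAR  from_defaults_conf

# Submit in cluster deploy mode, then check in the driver:
#   sys.env.get("MY_TEST_VAR")
# If the property took effect, the variable is present in the AM process;
# a value exported only in conf/spark-env.sh would not show up there.
```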

Jacek Laskowski
Mastering Spark SQL
Spark Structured Streaming
Mastering Kafka Streams

On Wed, Jan 3, 2018 at 7:57 AM, John Zhuge <> wrote:

> Hi,
> I am running Spark 2.0.0 and 2.1.1 on YARN in a Hadoop 2.7.3 cluster. Is
> conf/spark-env.sh sourced when starting the Spark AM container or the executor
> container?
> Saw this paragraph in the Spark on YARN docs:
> Note: When running Spark on YARN in cluster mode, environment variables
>> need to be set using the spark.yarn.appMasterEnv.[
>> EnvironmentVariableName] property in your conf/spark-defaults.conf file.
>> Environment variables that are set in spark-env.sh will not be reflected
>> in the YARN Application Master process in cluster mode. See the YARN-related
>> Spark Properties for more information.
> Does it mean spark-env.sh will not be sourced when starting the AM in cluster
> mode?
> Does this paragraph apply to the executor as well?
> Thanks,
> Thanks,
> --
> John Zhuge
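For what it's worth, the YARN-related properties the quoted paragraph points to cover both processes; a sketch of a conf/spark-defaults.conf fragment (variable name and values hypothetical):

```
# Environment for the YARN Application Master (and the driver in cluster deploy mode)
spark.yarn.appMasterEnv.MY_VAR   am_value

# Environment for the executors, via spark.executorEnv.[EnvironmentVariableName]
spark.executorEnv.MY_VAR         executor_value
```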
