spark-user mailing list archives

From Mayur Rustagi <>
Subject Re: Setting properties in core-site.xml for Spark and Hadoop to access
Date Fri, 07 Mar 2014 19:07:23 GMT
Set them as environment variables at boot & configure both stacks to call on them.
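A rough sketch of that boot-time step, assuming the conventional AWS variable names (the names and the idea of putting the exports in an env script such as spark-env.sh are assumptions, not something stated in the thread; both stacks still have to be configured to read these values):

```shell
# Illustrative only: export the credentials once at boot (e.g. from an
# env script sourced by both Spark and Hadoop daemons) so every process
# started afterwards inherits them. Values are placeholders.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="exampleSecretAccessKey"
```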

Mayur Rustagi
Ph: +1 (760) 203 3257
@mayur_rustagi <>

On Fri, Mar 7, 2014 at 9:32 AM, Nicholas Chammas <> wrote:

> On spinning up a Spark cluster in EC2, I'd like to set a few configs that
> will allow me to access files in S3 without having to specify my AWS access
> and secret keys over and over, as described here<>.
> The properties are fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey.
> Is there a way to set these properties programmatically so that Spark (via
> the shell) and Hadoop (via distcp) are both aware of and use the values?
> I don't think SparkConf does what I need because I want Hadoop to also be
> aware of my AWS keys. When I set those properties using conf.set() in
> pyspark, distcp didn't appear to be aware of them.
> Nick
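For reference, the approach named in the subject line can be sketched as a core-site.xml fragment. The property names are the ones quoted in the thread; the values are placeholders, and the assumption is that the file lives in the Hadoop configuration directory that both Hadoop (distcp) and Spark read:

```xml
<!-- Sketch of a core-site.xml fragment; values are placeholders. -->
<configuration>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
  </property>
</configuration>
```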
