spark-user mailing list archives

From Luke Rohde <rohde.l...@gmail.com>
Subject using amazon STS with spark
Date Mon, 02 May 2016 00:35:26 GMT
Hi - I'm using S3 storage with Spark and would like to authenticate with AWS
credentials provided by STS. I'm doing the following to use those
credentials:

// credentials here are the temporary ones returned by the STS AssumeRole call
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3.awsAccessKeyId", credentials.getAccessKeyId)
hadoopConf.set("fs.s3.awsSecretAccessKey", credentials.getSecretAccessKey)

Setting the keys this way works, but the credentials returned by the AssumeRole
API call are temporary and expire after a maximum lifetime of one hour. Does
anyone have suggestions for jobs that run longer than an hour and therefore need
that credential config to be refreshed?

In general: does modifying a value in the SparkConf (or the Hadoop configuration)
on the driver propagate to executors after they have started? If so, I could
imagine having a background thread on the driver periodically refresh the
credentials.
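
Something along these lines is what I have in mind (rough, untested sketch that
reuses stsClient and sc from above; the 50-minute period is just a guess to stay
ahead of the 1-hour expiry):

import java.util.concurrent.{Executors, TimeUnit}
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest

val refresher = Executors.newSingleThreadScheduledExecutor()
refresher.scheduleAtFixedRate(new Runnable {
  override def run(): Unit = {
    // re-assume the role and overwrite the keys on the driver's Hadoop config
    val refreshed = stsClient.assumeRole(
      new AssumeRoleRequest()
        .withRoleArn("arn:aws:iam::123456789012:role/my-spark-role") // placeholder
        .withRoleSessionName("spark-job"))
      .getCredentials
    sc.hadoopConfiguration.set("fs.s3.awsAccessKeyId", refreshed.getAccessKeyId)
    sc.hadoopConfiguration.set("fs.s3.awsSecretAccessKey", refreshed.getSecretAccessKey)
  }
}, 0, 50, TimeUnit.MINUTES)

Whether the executors would ever pick up the new values is exactly the part I'm
unsure about.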

Thanks in advance.
