[ https://issues.apache.org/jira/browse/HADOOP-9384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14136386#comment-14136386 ]
David S. Wang commented on HADOOP-9384:
---------------------------------------
+1 to [~stevel@apache.org]'s comment
> Update S3 native fs implementation to use AWS SDK to support authorization through roles
> ----------------------------------------------------------------------------------------
>
> Key: HADOOP-9384
> URL: https://issues.apache.org/jira/browse/HADOOP-9384
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Environment: Locally: RHEL 6, AWS S3
> Remotely: AWS EC2 (RHEL 6), AWS S3
> Reporter: D. Granit
> Priority: Minor
> Attachments: HADOOP-9384-v2.patch, HADOOP-9384.patch
>
>
> Currently the S3 native implementation, {{org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore}},
requires credentials to be set explicitly. Amazon also allows credentials to be granted to
instances rather than users, via IAM roles. Such role credentials are rotated frequently and
kept in a local cache, all of which is handled by the AWS SDK, in this case by the
{{AmazonS3Client}}. The SDK follows a specific order to establish whether credentials are set
explicitly or via a role:
> - Environment Variables: AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
> - Java System Properties: aws.accessKeyId and aws.secretKey
> - Instance Metadata Service, which provides the credentials associated with the IAM role
for the EC2 instance
> as seen in http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html
> To support this feature, the current {{NativeFileSystemStore}} implementation needs to
be altered to use the AWS SDK instead of the JetS3t S3 library.
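For context, the explicit-credentials setup that the SDK chain would supersede is typically a core-site.xml fragment using the standard s3n configuration keys (the values below are placeholders):

```xml
<!-- Explicit S3 credentials in core-site.xml; values are placeholders. -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>
```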
> A request for this feature was previously raised as part of the Flume project (FLUME-1691),
where the HDFS-on-S3 implementation is used as a means of writing logs to S3 via an HDFS
Sink.
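The lookup order described above can be sketched without the AWS SDK itself. The class and method names below are illustrative only, not part of Hadoop or the SDK; in the AWS Java SDK this precedence is implemented by its credentials provider chain.

```java
// A minimal, SDK-free sketch of the credential lookup order described in
// the issue. CredentialChainSketch and resolveCredentialSource are
// illustrative names, not real Hadoop or AWS SDK classes.
public class CredentialChainSketch {

    /**
     * Returns which source would supply credentials, mirroring the
     * order of precedence the AWS SDK follows.
     */
    static String resolveCredentialSource() {
        // 1. Environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
        if (System.getenv("AWS_ACCESS_KEY_ID") != null
                && System.getenv("AWS_SECRET_KEY") != null) {
            return "environment-variables";
        }
        // 2. Java system properties: aws.accessKeyId and aws.secretKey
        if (System.getProperty("aws.accessKeyId") != null
                && System.getProperty("aws.secretKey") != null) {
            return "java-system-properties";
        }
        // 3. Last resort: the EC2 instance metadata service, which serves
        // the temporary, auto-rotated credentials of the instance's IAM role.
        return "instance-metadata-service";
    }

    public static void main(String[] args) {
        System.out.println("Credentials would come from: "
                + resolveCredentialSource());
    }
}
```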
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)