hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HADOOP-13572) fs.s3native.mkdirs does not work if the user is only authorized to a subdirectory
Date Thu, 01 Sep 2016 18:21:20 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-13572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-13572.
    Resolution: Duplicate

> fs.s3native.mkdirs does not work if the user is only authorized to a subdirectory
> ---------------------------------------------------------------------------------
>                 Key: HADOOP-13572
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13572
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.7.2
>            Reporter: Marcin Zukowski
>            Priority: Minor
> Noticed this when working with Spark. I have an S3 bucket whose top-level directories have
protected access, and a dedicated open directory deeper in the tree for Spark temporary data.
> Writing to this directory fails with the following stack trace:
> {noformat}
> [info]   org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException:
S3 HEAD request failed for '/SPARK-SNOWFLAKEDB' - ResponseCode=403, ResponseMessage=Forbidden
> [info]   at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleServiceException(Jets3tNativeFileSystemStore.java:245)
> [info]   at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:119)
> [info]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [info]   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [info]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [info]   at java.lang.reflect.Method.invoke(Method.java:497)
> [info]   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> [info]   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> [info]   at org.apache.hadoop.fs.s3native.$Proxy34.retrieveMetadata(Unknown Source)
> [info]   at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:414)
> [info]   at org.apache.hadoop.fs.s3native.NativeS3FileSystem.mkdir(NativeS3FileSystem.java:539)
> [info]   at org.apache.hadoop.fs.s3native.NativeS3FileSystem.mkdirs(NativeS3FileSystem.java:532)
> [info]   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1933)
> [info]   at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:291)
> [info]   at org.apache.hadoop.mapred.FileOutputCommitter.setupJob(FileOutputCommitter.java:131)
> {noformat}
> I believe this is because mkdirs in NativeS3FileSystem.java tries to create directories
starting "from the root", so if the process can't "list" objects at a given level, it
fails. Perhaps it should tolerate this kind of failure, or walk "from the leaf" first to find
the level from which it needs to start creating directories. That might also improve performance,
assuming the directories exist most of the time.
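
The leaf-first strategy suggested above can be sketched as follows. This is a hypothetical illustration, not the NativeS3FileSystem code: `Store` is a stand-in for the object-store client, and a `SecurityException` stands in for an HTTP 403 on a protected ancestor, which is treated as "assume it exists" rather than a fatal error.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of "from the leaf" mkdirs: probe upward from the target path
// until an existing (or unreadable) ancestor is found, then create only
// the missing tail. Protected ancestors are never touched.
public class LeafFirstMkdirs {
    interface Store {
        boolean exists(String path);   // may throw SecurityException (403 stand-in)
        void createDir(String path);
    }

    static void mkdirs(Store store, String path) {
        Deque<String> toCreate = new ArrayDeque<>();
        String current = path;
        // Walk upward from the leaf; stop at the first ancestor that
        // exists, or that we are not allowed to inspect.
        while (current != null && !current.isEmpty()) {
            boolean found;
            try {
                found = store.exists(current);
            } catch (SecurityException e) {   // forbidden: assume it exists
                found = true;
            }
            if (found) break;
            toCreate.push(current);
            int slash = current.lastIndexOf('/');
            current = slash > 0 ? current.substring(0, slash) : null;
        }
        // Create the missing directories, shallowest first.
        while (!toCreate.isEmpty()) {
            store.createDir(toCreate.pop());
        }
    }
}
```

With this walk, a job writing under an open subdirectory never issues a HEAD/LIST against the protected top-level prefixes, and in the common case where the directories already exist only one probe is made.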

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org
