hadoop-common-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
Date Mon, 16 May 2016 15:57:13 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284760#comment-15284760 ]

Chris Nauroth commented on HADOOP-12718:
----------------------------------------

[~stevel@apache.org], I agree with those suggestions for tightening up the spec if we keep
this patch, but first I have a more fundamental question about whether we should keep the
patch at all.

Users of the local file system might not always expect the semantics of the spec and contract
tests (as mostly reverse engineered from HDFS).  In some cases, the caller is seeking to emulate
HDFS semantics, such as mini-cluster based tests.  In other cases, such as {{LocalDirAllocator}},
a caller explicitly calls {{FileSystem#getLocal}} and expects to work with the semantics of
the local file system.  For example, I mentioned on HADOOP-9507/HADOOP-13082 that there had
been a case in the past of Hive expecting very particular semantics from the local file system.

Unfortunately, aside from full testing of the whole ecosystem, it's hard to know for sure
that this change won't break something, because it's going to start throwing an exception
that users of the local file system haven't seen before.  This is why I was inclined to revert,
at least within the 2.x line.  I'd appreciate your thoughts on whether this makes sense or
whether I'm being overly paranoid.  Thanks!
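
As a side note on where the confusing message in the description below likely comes from, here is a minimal, hypothetical sketch (this is not the actual FsShell code, and {{describeFailure}} is an invented helper): a plain {{java.io.File#listFiles()}} null check cannot tell "missing" apart from "unreadable", because {{listFiles()}} returns null in both cases for a directory that is visible but not readable.

```java
import java.io.File;

public class PutErrorMessageDemo {
    // Hypothetical helper, not the actual FsShell code.
    // java.io.File#listFiles() returns null both when the path does not exist
    // and when the directory exists but cannot be read, so a caller that only
    // checks for null ends up reporting "No such file or directory" for a
    // permission problem as well.
    static String describeFailure(File dir) {
        File[] children = dir.listFiles();
        if (children != null) {
            return "OK: " + children.length + " entries";
        }
        if (!dir.exists()) {
            return "`" + dir.getName() + "': No such file or directory";
        }
        if (!dir.canRead()) {
            // Distinguishing the unreadable case yields the clearer message
            // proposed in this issue.
            return dir.getName() + " (Permission denied)";
        }
        return dir.getName() + ": cannot be listed";
    }

    public static void main(String[] args) {
        System.out.println(describeFailure(new File(args.length > 0 ? args[0] : ".")));
    }
}
```

Note that in the exact reproduction from the description, where the parent directory lacks execute permission for the calling user, even {{exists()}} returns false because the stat itself is denied, so distinguishing the two cases reliably needs more care than this sketch shows.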

> Incorrect error message by fs -put local dir without permission
> ---------------------------------------------------------------
>
>                 Key: HADOOP-12718
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12718
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: John Zhuge
>            Assignee: John Zhuge
>            Priority: Minor
>              Labels: supportability
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, HADOOP-12718.003.patch,
TestFsShellCopyPermission-output.001.txt, TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the "hadoop fs -put"
command prints a confusing error message "No such file or directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx------. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It would be more informative if the message were:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

