hadoop-common-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs
Date Tue, 05 Mar 2013 04:11:12 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593055#comment-13593055 ]

Chris Nauroth commented on HADOOP-8973:

Thanks for the comments, everyone.  This is very helpful.

Does this change the current directory of the calling process?

No, this forks a whole new process, and runs the cd within that process.  The working directory
of the calling process is unchanged.  I think this is safe.
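To illustrate why this is safe, here is a minimal Java sketch of the child-process idea (the class name, the probe command, and the directories are illustrative placeholders, not the actual winutils invocation in the patch): the child's working directory is set via {{ProcessBuilder#directory}}, which never touches the parent's working directory.

```java
import java.io.File;
import java.io.IOException;

public class ChildDirCheck {
    /**
     * Probe whether a directory is usable as a working directory by
     * launching a cheap command inside it. Only the child process runs
     * with that working directory; the parent's user.dir is untouched.
     * "hostname" is just a placeholder for an inexpensive command.
     */
    public static boolean canEnter(File dir) {
        try {
            Process p = new ProcessBuilder("hostname")
                .directory(dir)   // sets the CHILD's working directory only
                .start();
            return p.waitFor() == 0;
        } catch (IOException | InterruptedException e) {
            // Failing to launch in that directory suggests it is inaccessible.
            return false;
        }
    }

    public static void main(String[] args) {
        String before = System.getProperty("user.dir");
        canEnter(new File(System.getProperty("java.io.tmpdir")));
        // The calling process's working directory is unchanged.
        System.out.println(before.equals(System.getProperty("user.dir")));
    }
}
```

The real patch forks winutils rather than an arbitrary command, but the isolation property is the same: a child process cannot change its parent's current directory.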

Leveraging winutils in CheckDisk would provide a nice symmetry in the test.

Chris, I think it should be relatively easy to provide some API like this either through winutils
or JNI.

On all of the other discussion points, I think the summary is that we have discovered that
there are deficiencies in the current logic of {{DiskChecker}}, and it's not a problem specific
to Windows.  It's a problem on Linux too.  Considering this, I'd still like to proceed with
the basic approach in the current patch.  We can file a follow-up jira to fix the problem
more completely, with full consideration for other permission models that include things like
POSIX ACLs and NTFS ACLs.  (My opinion is that we should just wait for JDK7 instead of investing
in JNI calls, but that's just my opinion.)  The scope of this follow-up jira would include
both Linux and Windows.
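As a rough sketch of the JDK7 option mentioned above (the path here is illustrative): the NIO.2 API's access checks are intended to consult the platform's actual security subsystem, unlike the {{java.io.File}} boolean checks that {{DiskChecker#checkDir}} currently relies on.

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class AccessCheckComparison {
    public static void main(String[] args) {
        // Illustrative path; substitute a real data directory.
        Path dir = Paths.get(System.getProperty("java.io.tmpdir"));
        File legacy = dir.toFile();

        // Pre-JDK7 checks used by DiskChecker#checkDir today; not
        // reliable on Windows with NTFS ACLs due to a known JVM bug.
        boolean oldApi = legacy.canRead() && legacy.canWrite()
                && legacy.canExecute();

        // JDK7 NIO.2 equivalents, which perform real access checks.
        boolean newApi = Files.isReadable(dir) && Files.isWritable(dir)
                && Files.isExecutable(dir);

        System.out.println(oldApi + " " + newApi);
    }
}
```

This is only a sketch of what the follow-up jira might adopt once JDK7 is a supported platform, not a claim about what the final fix should look like.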

Arpit had provided some feedback on the actual code, and I do want to provide a new patch
to address that feedback.  I'm planning on uploading a new patch tomorrow.  If anyone disagrees
with the approach though, please let me know so that I don't waste time preparing a patch
that is objectionable.  :-)

Well. In that case, it would be a standard pattern everywhere, because everywhere code simply
checks the value of the permissions and not whether the process checking that value actually
has the right membership wrt that value. Isn't it so? Irrespective of OS.

I'm reluctant to change the code so that the permission checks are less comprehensive on Linux
for the sake of cross-platform consistency.  Right now, we have one overload of {{DiskChecker#checkDir}}
that is correct AFAIK, and another overload of {{DiskChecker#checkDir}} that is incomplete
when considering more sophisticated permission models on the local file system, like POSIX
ACLs.  The approach in the current patch at least achieves consistent behavior between Linux
and Windows, so at least we have symmetry with regards to that.

> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs
> ---------------------------------------------------------------------------------
>                 Key: HADOOP-8973
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8973
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: trunk-win
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HADOOP-8973-branch-trunk-win.patch
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to check if
> a directory is inaccessible.  These APIs are not reliable on Windows with NTFS ACLs due to
> a known JVM bug.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
