hadoop-common-issues mailing list archives

From "Daniel Templeton (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing
Date Tue, 08 Sep 2015 13:31:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734791#comment-14734791 ]

Daniel Templeton commented on HADOOP-12374:

Thank you for doing HADOOP-5323!  Very useful documentation.  I wasn't looking at the latest
docs, so I missed it.

Updating the patch to link directly to that section would absolutely be helpful.  It would
also be nice to mention that File Deletes and Undeletes is in the Space Reclamation section.
Because that's the last thing in the doc, the link points the browser partway up the page,
and knowing the major section title would help folks realize where the right text is.

I do still have one concern: my original one.  The doc text says "checkpoint," but nowhere
is that term defined.  Can we find a different way to phrase it?  What about something like:

    If trash is enabled when a file is deleted, HDFS instead moves the deleted file to a
    trash directory. This command causes HDFS to permanently delete files from the trash
    that are older than the retention threshold.  See [your link] for more information.

I don't think the details about checkpointing are important to have here.  I don't think the
average user cares, and there's a link for those who do.
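To make the proposed wording concrete, here is a small sketch of what "moves the deleted file to a trash directory" means in practice: the file's original absolute path is recreated under the user's trash directory. The user name `alice` and the `Current` subdirectory layout are illustrative assumptions based on the usual per-user trash location, not taken from this issue.

```python
from pathlib import PurePosixPath

# Illustrative per-user trash location; "alice" is a hypothetical user.
TRASH_ROOT = PurePosixPath("/user/alice/.Trash/Current")

def trash_destination(path: str) -> str:
    """Sketch of where a deleted file would land when trash is enabled:
    the original absolute path is recreated under the trash directory."""
    p = PurePosixPath(path)
    return str(TRASH_ROOT / p.relative_to("/"))

print(trash_destination("/data/logs/app.log"))
# /user/alice/.Trash/Current/data/logs/app.log
```

The point for the doc text is that deletion is a rename into this directory, and only `expunge` (or checkpoint expiry) makes the data unrecoverable.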

> Description of hdfs expunge command is confusing
> ------------------------------------------------
>                 Key: HADOOP-12374
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12374
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: documentation, trash
>    Affects Versions: 2.7.0, 2.7.1
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>            Priority: Trivial
>              Labels: documentation, newbie, suggestions, trash
>         Attachments: HADOOP-12374.001.patch
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on the Trash
> This description is confusing. It gives users the impression that this command will empty
the trash, but it actually only removes old checkpoints. If a user sets a long value for
fs.trash.interval, this command will not remove anything until checkpoints have existed
longer than that value.
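The retention behavior the reporter describes can be sketched as a simple age check: expunge removes only those trash checkpoints that are older than the fs.trash.interval threshold. The interval value and timestamps below are illustrative assumptions, not values from this issue.

```python
from datetime import datetime, timedelta

# Hypothetical fs.trash.interval of 3 days (the property is in minutes).
FS_TRASH_INTERVAL_MIN = 4320

def checkpoints_to_delete(checkpoints, now):
    """Sketch: return only the checkpoint timestamps that an expunge
    would remove, i.e. those older than the retention threshold."""
    cutoff = now - timedelta(minutes=FS_TRASH_INTERVAL_MIN)
    return [ts for ts in checkpoints if ts < cutoff]

now = datetime(2015, 9, 8, 12, 0)
cps = [datetime(2015, 9, 1), datetime(2015, 9, 7)]
print(checkpoints_to_delete(cps, now))
# Only the Sep 1 checkpoint is old enough; the Sep 7 one survives.
```

This is why, with a long fs.trash.interval, running the command right after deleting files removes nothing: no checkpoint has yet aged past the threshold.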

This message was sent by Atlassian JIRA
