hadoop-common-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures
Date Wed, 04 May 2016 22:21:12 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271539#comment-15271539 ]

Chris Nauroth commented on HADOOP-13091:
----------------------------------------

Let's please test this change using alternative Hadoop-compatible file systems as the source and the destination, e.g. export from HDFS to S3A and import from WASB to HDFS.  I expect it's fine, because those file systems return {{null}} from {{getFileChecksum}}, and that path does not depend on the current exception handling logic.  I'd still like us to verify that with testing, though.

This will require manual testing by creating a distro build, configuring credentials for S3A
and WASB, and running the DistCp commands.  I can volunteer to help as we move towards finalizing
the patch.  If others are able to test too, that would be helpful.
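
For reference, the manual runs would look something like the following (the bucket, container, and storage account names are placeholders):

{code}
# Export: HDFS -> S3A (bucket name is a placeholder)
hadoop distcp hdfs:///user/test/source s3a://example-bucket/target

# Import: WASB -> HDFS (container and account names are placeholders)
hadoop distcp wasb://container@account.blob.core.windows.net/source hdfs:///user/test/target
{code}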

> DistCp masks potential CRC check failures
> -----------------------------------------
>
>                 Key: HADOOP-13091
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13091
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Elliot West
>            Assignee: Lin Yiqun
>         Attachments: HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when requests for checksums from the source or target file system fail. In this event the CRCs could differ between the source and the target and yet the DistCp copy would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically, this code block suggests that if there is a failure when trying to read the source or target checksum then the method will return {{true}} (i.e. the checksums are equal), implying that the check succeeded. In actual fact we merely failed to obtain the checksum and could not perform the check at all.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
>     sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
>     + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be re-thrown. If this is not deemed desirable then I believe an option ({{--strictCrc}}?) should be added to enforce a strict check where we require that both the source and target CRCs are retrieved, are not null, and are then compared for equality. If for any reason either of the CRC retrievals fails then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs, and invocations of {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I would suggest that these should fail a strict CRC check to prevent users from developing a false sense of security in their copy pipeline.
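> A minimal sketch of what the strict behaviour could look like; the class and method names below are illustrative assumptions, not code from an actual patch:
> {code}
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FileChecksum;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Illustrative sketch only: a strict variant of the comparison in DistCpUtils.
> public class StrictCrcSketch {
>   static boolean strictChecksumsAreEqual(FileSystem sourceFS, Path source,
>       FileSystem targetFS, Path target) throws IOException {
>     // Let any IOException from getFileChecksum propagate instead of swallowing it.
>     FileChecksum sourceChecksum = sourceFS.getFileChecksum(source);
>     FileChecksum targetChecksum = targetFS.getFileChecksum(target);
>     if (sourceChecksum == null || targetChecksum == null) {
>       // A file system that cannot supply a checksum fails the strict check
>       // rather than silently passing it.
>       throw new IOException("Checksum unavailable for " + source + " or " + target);
>     }
>     return sourceChecksum.equals(targetChecksum);
>   }
> }
> {code}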



