hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11851) s3n to swallow IOEs on inner stream close
Date Thu, 23 Apr 2015 21:48:40 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509893#comment-14509893 ]

Steve Loughran commented on HADOOP-11851:
-----------------------------------------

This turns out to be a different symptom of the HADOOP-11570 problem: the chunked stream reader is trying to read to the end of the input stream.

That patch changed S3a's close() to shut the stream down more aggressively; we don't have an equivalent patch for s3n. Looking at HADOOP-11570 though, it's vulnerable to the same problem of a clean close() triggering an exception. It needs a more robust close() operation too.
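
As a minimal sketch of that kind of robust close(), assuming a simple wrapper stream; the class, field, and logger names here are illustrative, not the actual s3n code:

{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * Hypothetical wrapper stream (not the real s3n class) showing a close()
 * that downgrades inner-stream IOEs to warnings instead of rethrowing.
 */
class RobustInputStream extends InputStream {
  private static final Log LOG = LogFactory.getLog(RobustInputStream.class);

  private InputStream in;       // inner stream from the last GET
  private boolean closed;

  RobustInputStream(InputStream in) {
    this.in = in;
  }

  @Override
  public int read() throws IOException {
    return in.read();
  }

  @Override
  public void close() {
    if (closed) {
      return;                   // idempotent: a second close() is a no-op
    }
    closed = true;
    try {
      in.close();               // may fail with e.g. a connection reset
    } catch (IOException e) {
      // Swallow and warn: the next GET opens a fresh connection, so a
      // failed close of the old one should not fail the caller.
      LOG.warn("IOException closing inner stream", e);
    } finally {
      in = null;
    }
  }
}
{code}

The key point is that close() stays idempotent and never throws: a reset on the old connection is logged and forgotten, since the next GET opens a fresh one anyway.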

> s3n to swallow IOEs on inner stream close
> -----------------------------------------
>
>                 Key: HADOOP-11851
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11851
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Steve Loughran
>            Assignee: Takenori Sato
>            Priority: Minor
>
> We've seen a situation where some work was failing from (recurrent) connection reset
> exceptions.
> Irrespective of the root cause, these were surfacing not in the read operations, but
> when the input stream was being closed, including during a seek().
> These exceptions could be caught & logged at WARN level, rather than trigger immediate
> failures. It shouldn't matter to the next GET whether the last stream closed prematurely,
> as long as the new one works.
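
A sketch of the seek() case from that description: the old stream is closed leniently before a fresh ranged GET is issued. The reopen() helper and fields are assumptions for illustration, not s3n's actual internals:

{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/** Hypothetical seekable wrapper: close the old stream leniently, then reopen. */
abstract class LenientSeekStream extends InputStream {
  private static final Log LOG = LogFactory.getLog(LenientSeekStream.class);

  protected InputStream in;   // stream from the current GET
  protected long pos;         // current offset in the object

  /** Issue a fresh ranged GET starting at targetPos; assumed helper, not real s3n API. */
  protected abstract InputStream reopen(long targetPos) throws IOException;

  public synchronized void seek(long targetPos) throws IOException {
    if (targetPos == pos) {
      return;
    }
    try {
      in.close();             // the old connection; a reset here is harmless
    } catch (IOException e) {
      LOG.warn("Ignoring failure closing stream before seek to " + targetPos, e);
    }
    in = reopen(targetPos);   // the new GET is independent of the old close()
    pos = targetPos;
  }
}
{code}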



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
