hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
Date Tue, 09 Oct 2018 15:02:01 GMT
Steve Loughran created HADOOP-15834:

             Summary: Improve throttling on S3Guard DDB batch retries
                 Key: HADOOP-15834
                 URL: https://issues.apache.org/jira/browse/HADOOP-15834
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.2.0
            Reporter: Steve Loughran

The batch throttling may fail too fast.

If a batch update contains 25 writes but the default retry count is nine attempts, only nine
writes of the batch may ever be attempted, even when each individual attempt is successfully writing data.

In contrast, a single write of one piece of data gets the same number of attempts, so 25 individual
writes can survive far more throttling than one bulk write of 25 items.

Proposed: make the retry logic more forgiving of batch writes, e.g. do not count a batch call
in which at least one data item was written as a failure.
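A minimal sketch of the proposed behaviour, with hypothetical names (this is not the actual S3Guard/DynamoDBMetadataStore code): the retry budget is only consumed by attempts that write nothing, so a batch that is throttled but still making progress is never treated as a hard failure.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

/** Illustrative sketch of progress-aware batch retries; names are assumptions. */
public class BatchRetrySketch {

    /**
     * Repeatedly submits the pending items; {@code submit} returns the items
     * left unprocessed (e.g. DynamoDB's UnprocessedItems after throttling).
     * The failure counter is reset whenever an attempt writes at least one
     * item, so only attempts with zero progress consume the retry budget.
     */
    static <T> List<T> writeBatch(List<T> items,
                                  Function<List<T>, List<T>> submit,
                                  int maxFailedAttempts) {
        List<T> pending = new ArrayList<>(items);
        int failedAttempts = 0;
        while (!pending.isEmpty() && failedAttempts < maxFailedAttempts) {
            List<T> unprocessed = submit.apply(pending);
            if (unprocessed.size() < pending.size()) {
                failedAttempts = 0;   // at least one item written: not a failure
            } else {
                failedAttempts++;     // nothing written: count against the budget
            }
            pending = unprocessed;
        }
        return pending;               // items still unwritten, if any
    }

    public static void main(String[] args) {
        // Simulated heavy throttling: the store accepts only one item per call.
        List<Integer> batch = new ArrayList<>();
        for (int i = 0; i < 25; i++) batch.add(i);
        List<Integer> leftover = writeBatch(batch,
                pending -> pending.subList(1, pending.size()), 9);
        // All 25 items eventually complete despite a budget of 9, because
        // every call made progress; a fully stalled batch still fails fast.
        System.out.println("leftover=" + leftover.size());
    }
}
```

With the current scheme, the same workload would exhaust its nine attempts with 16 items still unwritten; here the budget only bounds consecutive zero-progress calls.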

This message was sent by Atlassian JIRA

