hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HADOOP-15426) S3guard throttle events => 400 error code => exception
Date Wed, 25 Jul 2018 04:56:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554997#comment-16554997 ]

Steve Loughran edited comment on HADOOP-15426 at 7/25/18 4:55 AM:
------------------------------------------------------------------

This is pretty major, as it really means: implement the retry logic for the DDB metastore,
adding the ability to choose a different throttle policy for DDB than for S3. The assumption
is that it takes serious effort to throttle S3, while DDB throttling can come simply from an
underprovisioned table, so it will be seen more often: we need more retries and more backoff
before giving up.
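A rough illustration of that distinct policy: a retry loop along the lines below could wrap
the DDB metastore calls. The class name, retry count and backoff values are assumptions for
the sketch, not the eventual patch.

{code}
import java.util.concurrent.Callable;

import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;

/**
 * Sketch only: retry a DynamoDB metastore call with exponential backoff.
 * The limits are illustrative; an underprovisioned table throttles easily,
 * so the policy is more generous than the S3 one would be.
 */
public final class DdbThrottleRetry {

  private static final int MAX_RETRIES = 9;        // more attempts than for S3
  private static final long BASE_DELAY_MS = 100;   // initial backoff

  private DdbThrottleRetry() {
  }

  public static <T> T invoke(Callable<T> operation) throws Exception {
    long delay = BASE_DELAY_MS;
    for (int attempt = 0; ; attempt++) {
      try {
        return operation.call();
      } catch (ProvisionedThroughputExceededException e) {
        if (attempt >= MAX_RETRIES) {
          throw e;            // give up: surface the throttle event
        }
        Thread.sleep(delay);  // back off before retrying
        delay *= 2;           // grow the sleep exponentially
      }
    }
  }
}
{code}

Callers would then wrap each metastore operation, e.g. {{DdbThrottleRetry.invoke(() -> table.deleteItem(spec))}}.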


> S3guard throttle events => 400 error code => exception
> ------------------------------------------------------
>
>                 Key: HADOOP-15426
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15426
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: Screen Shot 2018-07-24 at 15.16.46.png
>
>
> Managed to trigger this on a parallel test run:
> {code}
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 	at 
> {code}
> We should be able to handle this. It surfaces as a 400 "bad things happened" error though, not the 503 that S3 sends when throttling (see the classification sketch below).
> h3. We need a retry handler for DDB throttle operations
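Since DDB signals throttling with a 400 rather than S3's 503, a retry handler has to
recognise the event from the SDK error code, not the HTTP status. A minimal sketch of
that classification, assuming the v1 AWS SDK exception type (class and method names
here are illustrative):

{code}
import com.amazonaws.AmazonServiceException;

/**
 * Sketch only: identify a DynamoDB throttle event. The 400 status alone is
 * ambiguous, as DDB uses it for many client-side errors, so match on the
 * error code string instead.
 */
public final class DdbThrottleClassifier {

  private DdbThrottleClassifier() {
  }

  public static boolean isThrottleEvent(AmazonServiceException e) {
    return "ProvisionedThroughputExceededException".equals(e.getErrorCode());
  }
}
{code}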



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


