jclouds-notifications mailing list archives

From "Andrew Gaul (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (JCLOUDS-1203) aws-ec2 rate-limiting causes provisioning to fail: need longer back-off/retry
Date Sat, 22 Apr 2017 20:03:04 GMT

     [ https://issues.apache.org/jira/browse/JCLOUDS-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Gaul updated JCLOUDS-1203:
---------------------------------
    Component/s: jclouds-compute

> aws-ec2 rate-limiting causes provisioning to fail: need longer back-off/retry
> -----------------------------------------------------------------------------
>
>                 Key: JCLOUDS-1203
>                 URL: https://issues.apache.org/jira/browse/JCLOUDS-1203
>             Project: jclouds
>          Issue Type: Bug
>          Components: jclouds-compute
>    Affects Versions: 1.9.2, 2.0.0
>            Reporter: Aled Sage
>              Labels: aws-ec2
>
> TL;DR: increase default retry/back-off to 500ms and 6 retries.
> In Apache Brooklyn (which uses jclouds), we hit {{Request limit exceeded}} when provisioning VMs in aws-ec2 [1]. We were provisioning multiple machines concurrently: different threads were independently calling {{createNodesInGroup}}. The default exponential back-off and retry within jclouds wasn't enough.
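
For illustration only, a minimal sketch of the concurrent-provisioning pattern described above; the class, group names, thread count and wait time are assumptions for the sketch, not details from the Brooklyn report:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.jclouds.compute.ComputeService;
import org.jclouds.compute.RunNodesException;

// Several threads provisioning independently, as described above.
public class ConcurrentProvisioning {
   // Each createNodesInGroup call issues its own sequence of EC2 API requests
   // (RunInstances, security-group setup, state polling), so concurrent
   // callers can collectively trip AWS's request limit.
   public static void provisionConcurrently(final ComputeService compute, int vms) throws InterruptedException {
      ExecutorService executor = Executors.newFixedThreadPool(vms);
      for (int i = 0; i < vms; i++) {
         final String group = "demo-" + i;              // hypothetical group name
         executor.submit(new Runnable() {
            public void run() {
               try {
                  compute.createNodesInGroup(group, 1); // one VM per call
               } catch (RunNodesException e) {
                  e.printStackTrace();                  // rate-limit failures surface here
               }
            }
         });
      }
      executor.shutdown();
      executor.awaitTermination(30, TimeUnit.MINUTES);
   }
}
{code}
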
> My understanding is that AWS will rate-limit based on the nature (as well as number) of API calls. For example, if creating/modifying security groups is a more expensive operation (from AWS's perspective) than a simple poll for a machine's state, then those requests would cause rate-limiting sooner.
> Within jclouds, the defaults are {{retryCountLimit = 5}} and {{delayStart = 50ms}} (see [2]).
> This means we retry with the back-offs being (approximately) 50ms, 100ms, 200ms, 400ms and 500ms.
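
For reference, a tiny sketch that reproduces the quoted sequence if one reads it as the delay doubling per retry and being capped at 10x {{delayStart}}; this is only a reading of those figures, not jclouds' actual formula in {{BackoffLimitedRetryHandler}}:

{code:java}
// Hypothetical reading of the sequence quoted above: delay roughly doubles per
// retry and is capped at 10 x delayStart. Illustrative only.
public class BackoffSequence {
   public static void main(String[] args) {
      long delayStart = 50;     // ms, the default delayStart
      int retryCountLimit = 5;  // the default retryCountLimit
      for (int retry = 0; retry < retryCountLimit; retry++) {
         long delay = Math.min(delayStart << retry, delayStart * 10);
         System.out.println("retry " + (retry + 1) + ": " + delay + "ms");
      }
      // prints: 50ms, 100ms, 200ms, 400ms, 500ms
   }
}
{code}
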
> We overrode the defaults to be 500ms and 6 retries, and could then successfully provision 20 VMs concurrently. Six of the 20 calls to {{RunInstances}} were rate-limited; it took several retries before those requests were accepted, backing off for more than 4 seconds in some cases.
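
A minimal sketch of one way to apply such an override when building the context, assuming the retry handler's settings are driven by the usual {{Constants.PROPERTY_RETRY_DELAY_START}} and {{Constants.PROPERTY_MAX_RETRIES}} keys; the provider credentials shown are placeholders:

{code:java}
import java.util.Properties;

import org.jclouds.Constants;
import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeServiceContext;

public class LongerBackoff {
   public static void main(String[] args) throws Exception {
      // Raise the retry handler's defaults: 500ms initial delay, 6 retries.
      Properties overrides = new Properties();
      overrides.setProperty(Constants.PROPERTY_RETRY_DELAY_START, "500");
      overrides.setProperty(Constants.PROPERTY_MAX_RETRIES, "6");

      ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
            .credentials("accessKeyId", "secretAccessKey")  // placeholder credentials
            .overrides(overrides)
            .buildView(ComputeServiceContext.class);
      try {
         // provision as usual, e.g. context.getComputeService().createNodesInGroup(...)
      } finally {
         context.close();
      }
   }
}
{code}
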
> At worst, the existing short back-off may make things worse (the overly aggressive retry might cause other concurrent calls to also be rate-limited).
> At best, the short back-off simply isn't long enough, so that particular VM provisioning fails. For example, if AWS uses a leaky bucket algorithm [3] then hopefully some requests would keep on getting through. But I believe AWS doesn't publicise such details of its algorithm/implementation.
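
For readers unfamiliar with the reference, a toy leaky-bucket sketch (purely illustrative; as noted above, AWS's actual throttling implementation is not public):

{code:java}
// Toy leaky bucket: requests are admitted while the bucket has room, and the
// bucket drains at a fixed rate, so a steady trickle keeps getting through
// even while bursts are rejected.
public class LeakyBucket {
   private final long capacity;
   private final double drainPerMs;
   private double level = 0;
   private long lastDrainMs = System.currentTimeMillis();

   public LeakyBucket(long capacity, double drainPerMs) {
      this.capacity = capacity;
      this.drainPerMs = drainPerMs;
   }

   public synchronized boolean tryAcquire() {
      long now = System.currentTimeMillis();
      level = Math.max(0, level - (now - lastDrainMs) * drainPerMs);
      lastDrainMs = now;
      if (level + 1 <= capacity) {
         level += 1;    // admit the request
         return true;
      }
      return false;     // bucket full: caller is rate-limited
   }
}
{code}
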
> [1] https://issues.apache.org/jira/browse/BROOKLYN-394
> [2] https://github.com/jclouds/jclouds/blob/rel/jclouds-2.0.0/core/src/main/java/org/jclouds/http/handlers/BackoffLimitedRetryHandler.java#L81-#L87
> [3] http://en.wikipedia.org/wiki/Leaky_bucket



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
