whirr-user mailing list archives

From Andrei Savu <savu.and...@gmail.com>
Subject Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded
Date Mon, 31 Oct 2011 14:32:12 GMT
Answers inline.


> I was trying to start a Hadoop cluster of 20 datanodes|tasktrackers.
>
> What is the current upper bound?
>
>
We haven't done any testing to find the exact limit, but it seems that when
starting a cluster with ~20 nodes jclouds makes too many API requests to AWS.
We should be able to overcome this limitation by changing settings.
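
Something along these lines in the cluster properties file might help
(an untested sketch: the property names come from jclouds, the values are
guesses, and I'm assuming Whirr forwards jclouds.* settings to the compute
context):

  # fewer parallel API calls during launch, more retries on a 503
  # (untested values)
  jclouds.user-threads=5
  jclouds.max-retries=10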


>
>> I have created a new JIRA issue so that we can add this automatically
>> when the image-id is known:
>> https://issues.apache.org/jira/browse/WHIRR-416
>>
>
> I am looking forward to seeing if this will fix my problem and increase
> the number of nodes one can use in Hadoop clusters started via Whirr.


I hope we are going to be able to get this in for 0.8.0.
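
Until then, you can pin the image yourself in the recipe. A sketch (the AMI
id below is a placeholder, use a real one for your region):

  whirr.location-id=us-east-1
  whirr.image-id=us-east-1/ami-xxxxxxxx
  whirr.hardware-id=m1.large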


>
>
>> What if you start a smaller cluster with more powerful machines?
>>
>
> An option, but not a good one in the context of MapReduce, is it? :-)
> m1.large instances are powerful (and expensive) enough for what I want to do.
>
>
How about m1.xlarge? (twice as powerful - and *only* twice as expensive).
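
In the recipe that would look roughly like this (same aggregate capacity,
half as many instances to launch):

  whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,10 hadoop-datanode+hadoop-tasktracker
  whirr.hardware-id=m1.xlarge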

How are you using Apache Whirr? What's the end result?

Your feedback is extremely important for our future roadmap.
