I think we have a few users who use Whirr to deploy clusters with more than 10 nodes.

My suggestion is to take a look at the configuration page because there are some settings you can tweak so that Whirr can start larger clusters.
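For example, a couple of properties worth trying are the startup retry count and the per-template failure tolerance (property names as documented in the Whirr configuration guide; check the guide for the exact semantics and defaults in your version). A sketch of what this might look like in the cluster's properties file:

```
# Retry a failed instance a few times before giving up on it
whirr.max-startup-retries=3

# Require 100% of the master template to start, but only 75% of the
# datanode/tasktracker template, so a few flaky workers don't abort
# the whole cluster launch (percentages here are illustrative)
whirr.instance-templates-max-percent-failures=100 hadoop-namenode+hadoop-jobtracker,75 hadoop-datanode+hadoop-tasktracker
```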

Tibor, any feedback on this? How are you handling similar issues?

On Oct 7, 2011 5:07 PM, "Paolo Castagna" <> wrote:
I am using Apache Whirr 0.6.0-incubating.

When I start a Hadoop cluster on EC2 using 11 datanodes/tasktrackers:
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,11 hadoop-datanode+hadoop-tasktracker
everything seems to go fine. I sometimes see one or two instances
that are not able to start correctly, but Whirr seems to terminate
those and start new ones.

If I try to run a Hadoop cluster with 20 or more
datanodes/tasktrackers, the number of errors increases.

I see a lot of errors like this:

2011-10-07 07:54:50,058 ERROR [jclouds.compute] (user thread 13) <<
problem applying options to node(eu-west-1/i-eec231a7): request POST HTTP/1.1 failed with code 503,
error: AWSError{requestId='af239496-844a-49c3-99d0-fdf0d01b7f45',
requestToken='null', code='RequestLimitExceeded', message='Request
limit exceeded.', context='{Response=, Errors=}'}
       at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(
       at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(
       at org.jclouds.http.internal.BaseHttpCommandExecutorService$
       at org.jclouds.http.internal.BaseHttpCommandExecutorService$
       at java.util.concurrent.FutureTask$Sync.innerRun(
       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
       at java.util.concurrent.ThreadPoolExecutor$

After a while Whirr gives up and fails to start the cluster.

Any idea why this happens?