whirr-user mailing list archives

From Julien Nioche <lists.digitalpeb...@gmail.com>
Subject Re: Hadoop 1.2.1 cluster losing slaves on EC2
Date Wed, 11 Dec 2013 09:11:38 GMT
Hi,

Is there a way I can easily restart the task trackers on a cluster created
by Whirr?
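In the absence of a usable conf/slaves file, something along these lines is what I have in mind — a minimal sketch, assuming the standard Hadoop 1.x daemon scripts. The host names, SSH user, and HADOOP_HOME path are guesses for my setup; DRY_RUN=1 just prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch: restart the TaskTracker daemon on each slave of a Whirr-launched
# Hadoop 1.x cluster over SSH. Host names, user, and HADOOP_HOME are
# assumptions -- adjust for your cluster.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}  # assumed install path
SSH_USER=${SSH_USER:-$(whoami)}                # Whirr logs in as the local user by default
DRY_RUN=${DRY_RUN:-1}                          # set to 0 to actually run the commands

restart_tasktracker() {
  host=$1
  # hadoop-daemon.sh stop/start is the standard way to bounce a single
  # daemon on one node in Hadoop 1.x
  cmd="$HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker; $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker"
  if [ "$DRY_RUN" = 1 ]; then
    echo "ssh $SSH_USER@$host '$cmd'"
  else
    ssh "$SSH_USER@$host" "$cmd"
  fi
}

# Slave public DNS names would come from the Whirr instances file
# (e.g. ~/.whirr/<cluster>/instances); these two are placeholders.
for h in slave1.example.com slave2.example.com; do
  restart_tasktracker "$h"
done
```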

Thanks

Julien
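Re the JettyBugMonitor abort quoted below: the fatal threshold named in the log is a mapred-site.xml property, and judging by MAPREDUCE-2980 the simplest workaround may be to raise it or disable the check entirely. The first property name below is copied straight from the log message; the second is my recollection of the JettyBugMonitor switch and may be wrong, so please double-check it against the 1.2.1 source. The value is purely illustrative:

```xml
<!-- mapred-site.xml on each slave (sketch, not verified on 1.2.1) -->
<property>
  <name>mapred.tasktracker.jetty.cpu.threshold.fatal</name>
  <!-- illustrative value only; the default is much lower -->
  <value>200.0</value>
</property>
<property>
  <!-- assumed switch to disable the Jetty CPU check altogether -->
  <name>mapred.tasktracker.jetty.cpu.check.enabled</name>
  <value>false</value>
</property>
```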


On 10 December 2013 20:15, Julien Nioche <lists.digitalpebble@gmail.com> wrote:

> After a bit of digging I found that my issue seems to be related to
> https://issues.apache.org/jira/browse/MAPREDUCE-2980.  The task trackers
> get killed but the data nodes are fine.
>
> The logs show:
>
> 2013-12-10 10:07:16,179 FATAL org.apache.hadoop.mapred.JettyBugMonitor:
> ************************************************************
> Jetty CPU usage: 46063220198.5%. This is greater than the fatal threshold
> mapred.tasktracker.jetty.cpu.threshold.fatal. Aborting JVM.
> ************************************************************
>
> so not really a Whirr issue as such.
>
> Julien
>
>
>
>
> On 10 December 2013 11:14, Julien Nioche <lists.digitalpebble@gmail.com> wrote:
>
>> Hi,
>>
>> I am using Whirr to launch a Hadoop 1.2.1 cluster on EC2. The cluster is
>> progressively losing slaves until none remain, although the slave
>> instances themselves are still alive and running.
>>
>> I read somewhere that the conf/slaves file is not used by Whirr, so I
>> can't simply add the slaves back there.
>>
>> Any idea of what could be wrong?
>>
>> Thanks
>>
>> Julien
>>
>
>
>



-- 

Open Source Solutions for Text Engineering

http://digitalpebble.blogspot.com/
http://www.digitalpebble.com
http://twitter.com/digitalpebble
