hadoop-mapreduce-dev mailing list archives

From Wellington Chevreuil <wellington.chevre...@gmail.com>
Subject Re: Duplicating map reduce tasks
Date Wed, 02 Jul 2014 13:17:26 GMT
Hi Tina,

That's not controllable with Hadoop MapReduce. Hadoop will sometimes do this on its own (if you have
speculative execution enabled), purely for performance reasons: when a given task is taking too
long to complete on a given node, and since running the same code against a replica of the same
input will always produce the same result, Hadoop launches a duplicate of that task on a different
node. Once the first attempt finishes, its result is used and the remaining attempts are killed.
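
For reference, speculative execution is toggled with the mapred-site.xml properties below (these are the Hadoop 2.x names; older releases used mapred.map.tasks.speculative.execution and mapred.reduce.tasks.speculative.execution). It can also be set per job via Job#setSpeculativeExecution(boolean).

```xml
<!-- mapred-site.xml: enable/disable speculative execution (Hadoop 2.x property names) -->
<property>
  <name>mapreduce.map.speculative</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>true</value>
</property>
```

Note this only duplicates straggler tasks for speed; it does not let you request N replicas of a task or compare their outputs, which is what you asked about.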

Cheers.  

On 2 Jul 2014, at 07:00, Tina Samuel <tinasamuel89@gmail.com> wrote:

> Hi,
> I would like to provide the map reduce jobs in the following format :-
> 
> <Map Reduce task> <number_of_replicas>
> 
> I want to execute the specified task the specified number of
> times(number_of_replicas) on different nodes and then I want to compare the
> results produced by these task copies. Is it possible to do this in Hadoop
> Map reduce? If not possible, is there any means by which I can modify the
> code of map reduce so that I can do it?
> 
> Thanks & Regards,
> Tina

