hadoop-mapreduce-dev mailing list archives

From Eslam Elnikety <eslam.elnik...@gmail.com>
Subject Reducer-Mapper Communication
Date Sat, 05 Feb 2011 10:19:23 GMT
Dear all,

I need to pass data from a reducer task to a mapper task. Currently, I am
testing Hadoop in a pseudo-distributed mode.

The reducer (org.apache.hadoop.mapred.ReduceTask) executes the following

    InetAddress address = InetAddress.getByName("localhost");
    serverSocket = new ServerSocket(port, 0, address);
    socket = serverSocket.accept();
    outStream = new ObjectOutputStream(socket.getOutputStream());

where the mapper (org.apache.hadoop.mapred.MapTask) executes:

    InetAddress address = InetAddress.getByName("localhost");
    socket = new Socket(address, port);
    inStream = new ObjectInputStream(socket.getInputStream());

The variable port has the same value in the mapper and the reducer; it is
assigned dynamically by scanning for a free port on localhost.
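As an aside, scanning for a free port and binding to it afterwards can race
with other processes grabbing the same port in between; binding to port 0
asks the OS to pick a free port atomically. A minimal sketch (the FreePort
class name is my own, not from Hadoop):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
    public static void main(String[] args) throws IOException {
        // Port 0 tells the OS to assign a free ephemeral port at bind time,
        // avoiding the scan-then-bind race. Read the chosen port back.
        try (ServerSocket serverSocket = new ServerSocket(0)) {
            System.out.println(serverSocket.getLocalPort());
        }
    }
}
```

The chosen port would then need to be communicated to the mapper side out of
band (e.g. via the job configuration) rather than both sides scanning
independently.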

The scenario goes like this:
1) The reducer listens correctly on the port (I checked it with netstat)
2) The mapper throws a java.net.ConnectException: Connection refused

I can connect to that open port with a test program, run both on localhost
and on a remote machine, using the same code as the mapper, while the
ReduceTask is waiting on serverSocket.accept(). It fails only when the code
is executed by the mapper. I have also tried replacing localhost with the
loopback address (127.0.0.1) and with the eth0 IP address, but I get the
same behavior as described above. Any suggestions on what might be causing
the problem? Thanks!
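For reference, the handshake itself works fine outside Hadoop; the following
self-contained sketch runs the "reducer" (server) side on a thread and the
"mapper" (client) side in main, in one JVM. The class name and the message
string are my own; the socket and object-stream calls mirror the snippets
above:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackHandshake {
    public static void main(String[] args) throws Exception {
        InetAddress address = InetAddress.getByName("localhost");
        // Bind to port 0 so the OS picks a free port, then read it back.
        ServerSocket serverSocket = new ServerSocket(0, 0, address);
        int port = serverSocket.getLocalPort();

        // "Reducer" side: accept one connection and send one object.
        Thread reducerSide = new Thread(() -> {
            try (Socket s = serverSocket.accept();
                 ObjectOutputStream out =
                         new ObjectOutputStream(s.getOutputStream())) {
                out.writeObject("hello from reducer");
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reducerSide.start();

        // "Mapper" side: connect to the same port and read the object back.
        try (Socket socket = new Socket(address, port);
             ObjectInputStream in =
                     new ObjectInputStream(socket.getInputStream())) {
            System.out.println(in.readObject());
        }

        reducerSide.join();
        serverSocket.close();
    }
}
```

Since this pattern succeeds in isolation, the "Connection refused" seen only
inside the map task suggests the difference lies in the environment the
tasks run in, not in the socket code itself.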

