hbase-user mailing list archives

From grailcattt <pans...@yahoo.com>
Subject HBase/Zookeeper -- System Fails when IP Address Changes
Date Tue, 01 Feb 2011 15:16:23 GMT

I have hadoop/hbase running on a notebook as my dev env. I have everything
set up to use localhost, which is defined as 127.0.0.1 in my /etc/hosts (I
removed the other entries for localhost).
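
For reference, my hosts file now looks roughly like this (a sketch; the
exact remaining entries depend on which OS defaults were removed):

```
# /etc/hosts -- everything pinned to the loopback address
127.0.0.1   localhost
```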

The system works great all day, but when I go home and start the system on
a different network, with a different IP address, it no longer works.

The first thing I notice in my namenode log is this cryptic INFO line:
<code>
2011-02-01 07:56:09,696 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 6 on 8020, call delete(/usr/share/hadoop/mapred/system, true) from
127.0.0.1:49216: error:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
/usr/share/hadoop/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990.
Safe mode will be turned off automatically.
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
/usr/share/hadoop/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990.
Safe mode will be turned off automatically.
	at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700)
</code>

Hadoop eventually exits safe mode, but this safe-mode delay does not happen
during startup when the system is working normally.
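
For what it's worth, the 0.999 ratio in that message is governed by a
namenode setting. A sketch of the relevant hdfs-site.xml property (the name
`dfs.safemode.threshold.pct` is from 0.20-era Hadoop; newer versions rename
it):

```xml
<!-- hdfs-site.xml: fraction of blocks that must be reported
     before the namenode leaves safe mode automatically -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999</value>
</property>
```

One can also kick the namenode out manually with `hadoop dfsadmin -safemode
leave`, though that only hides whatever is keeping blocks from reporting.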

The next thing I notice, upon starting HBase, is this in my namenode log:
<code>
2011-02-01 08:04:34,313 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=pansonm,staff,hadoop,com.apple.access_screensharing,_developer,_lpoperator,_lpadmin,_appserveradm,admin,_appserverusr,localaccounts,everyone,hadoop1,com.apple.sharepoint.group.1,com.apple.sharepoint.group.2
ip=/127.0.0.1	cmd=mkdirs	src=/hbase/.logs/192.168.1.12,49320,1296572670348
dst=null	perm=pansonm:supergroup:rwxr-xr-x
</code>

Notice the reference to my LAN IP address 192.168.1.12. This isn't an
error, but it is curious that Hadoop is using the LAN address here rather
than localhost/127.0.0.1.
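
To illustrate what I think is going on (a minimal Python sketch, not HBase
code): the daemons resolve the machine's own hostname at startup, and the
OS may map that name to the active interface's address rather than the
loopback:

```python
# Minimal sketch (assumption: this mirrors the daemons' self-registration,
# it is not HBase code): resolve "localhost" vs. the machine's hostname.
import socket

loopback = socket.gethostbyname("localhost")  # 127.0.0.1
try:
    machine = socket.gethostbyname(socket.gethostname())  # may be a LAN IP
except socket.gaierror:
    machine = loopback  # hostname not resolvable on this box

print("localhost ->", loopback)
print(socket.gethostname(), "->", machine)
# When these two differ, a service that registers itself by resolving its
# own hostname publishes the LAN address -- which goes stale as soon as
# the notebook moves to another network.
```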

The main problem appears in my hbase log:

<code>
2011-02-01 08:04:38,900 DEBUG
org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: Opened region
-ROOT-,,0.70236052 on 192.168.1.12,49320,1296572670348
2011-02-01 08:04:58,931 INFO org.apache.hadoop.ipc.HbaseRPC: Problem
connecting to server: 192.168.1.2/192.168.1.2:51038
2011-02-01 08:04:59,934 FATAL org.apache.hadoop.hbase.master.HMaster:
Unhandled exception. Starting shutdown.
java.net.SocketException: Host is down
	at sun.nio.ch.Net.connect(Native Method)
	at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
	at
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
</code>

The reference to 192.168.1.2 certainly won't work, since that is my old
address. And it appears that this stale address was persisted and is now
being used as a locator for the data.
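
That `192.168.1.12,49320,1296572670348` string from the logs is a
`host,port,startcode` triple, which seems to be how HBase names a region
server. A toy parser (my own illustration, not HBase code) shows why the
address sticks around:

```python
# Toy illustration (not HBase code): the server name HBase records is a
# "host,port,startcode" triple, so the host captured at startup lingers
# in metadata even after the machine's IP changes.
def parse_server_name(name):
    host, port, startcode = name.split(",")
    return host, int(port), int(startcode)

host, port, startcode = parse_server_name("192.168.1.12,49320,1296572670348")
print(host, port, startcode)  # 192.168.1.12 49320 1296572670348
```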

The only solution I have now is deleting all the data and reformatting HDFS
-- which I'm now doing twice per day.

Thanks much for your help. 
-- 
View this message in context: http://old.nabble.com/HBase-Zookeeper----System-Fails-when-IP-Address-Changes-tp30816966p30816966.html
Sent from the HBase User mailing list archive at Nabble.com.

