hbase-user mailing list archives

From "Jean-Daniel Cryans" <jdcry...@gmail.com>
Subject Re: Hbase single-Node cluster config problem
Date Fri, 01 Aug 2008 13:13:57 GMT
Yair,

It seems that your master is unable to communicate with HDFS (that's what the
SocketTimeoutException indicates). To correct this, I would check that HDFS is
running by looking at its web UI, make sure that the namenode ports are open
(using telnet, for example), and also check that HDFS uses the default ports,
since your hbase.rootdir assumes the namenode is listening on port 9000.
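
As a quick programmatic check, here is a minimal sketch (hypothetical class
name, plain Hadoop FileSystem API, namenode address taken from your
hbase.rootdir) that opens the same connection the master tries to open. Run it
on the master host; if it also times out, the problem is between that host and
the namenode rather than in HBase itself.

    // CheckHdfs.java: minimal HDFS connectivity check (a sketch, not part of
    // HBase; assumes the namenode address from the hbase.rootdir below).
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckHdfs {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Must match fs.default.name in hadoop-site.xml and the host:port
        // part of hbase.rootdir in hbase-site.xml.
        conf.set("fs.default.name",
            "hdfs://ec2-67-202-24-167.compute-1.amazonaws.com:9000");
        FileSystem fs = FileSystem.get(conf);   // same call the master makes
        System.out.println("HDFS reachable, / exists: " + fs.exists(new Path("/")));
        fs.close();
      }
    }

Compile it against the hadoop jar already on your classpath and run it on the
master machine.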

J-D

On Fri, Aug 1, 2008 at 5:40 AM, Yabo-Arber Xu <arber.research@gmail.com> wrote:

> Greetings,
>
> I am trying to set up an HBase cluster. To simplify the setup, I first
> tried a single-node cluster, where the HDFS namenode/datanode run on one
> computer and the HBase master/regionserver run on the same computer.
> HDFS passed the tests and works well, but for HBase, when I try to create
> a table using the hbase shell, it keeps printing the following message:
>
> 08/08/01 02:30:29 INFO ipc.Client: Retrying connect to server: ec2-67-202-24-167.compute-1.amazonaws.com/10.254.199.132:60000. Already tried 1 time(s).
>
> I checked the hbase log, and it has the following error:
>
> 2008-08-01 02:30:24,337 ERROR org.apache.hadoop.hbase.HMaster: Can not start master
> java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3313)
>        at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3347)
> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc response
>        at org.apache.hadoop.ipc.Client.call(Client.java:514)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
>        at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:291)
>        at org.apache.hadoop.dfs.DFSClient.createNamenode(DFSClient.java:128)
>        at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:151)
>        at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:65)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1182)
>        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:55)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1193)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:150)
>
> For your information, I also attach the hbase-site.xml:
>
>  <property>
>    <name>hbase.master</name>
>    <value>ec2-67-202-24-167.compute-1.amazonaws.com:60000</value>
>    <description>The host and port that the HBase master runs at.
>    </description>
>  </property>
>
>  <property>
>    <name>hbase.rootdir</name>
>    <value>hdfs://ec2-67-202-24-167.compute-1.amazonaws.com:9000/hbase</value>
>    <description>The directory shared by region servers.
>    </description>
>  </property>
>
> Can anybody point out what I did wrong?
>
> Thanks in advance
>
> -Arber
>
