hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: Failure to get HTable using MapReduce on some Nodes
Date Fri, 15 Jul 2011 16:37:15 GMT
On Fri, Jul 15, 2011 at 7:36 AM, Adam Shook <ashook@clearedgeit.com> wrote:
> I am running a MapReduce job using standard input and output formats and using an HTable
> as a reference data set in my Mapper code.  I am using a small cluster of around 10 nodes.
> In my setup phase I am using an HTablePool to get a reference to a table.  On all but two
> nodes, the call to get the table hangs and eventually causes the task to fail.  However,
> on two of the 10 machines, it retrieves the table and it's business as usual.  (I also just
> tried creating a new HTable without the pool - no dice).
>

Any exception thrown?
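
For what it's worth, something like the following in setup() would at least
surface any exception in the task logs (class and table names below are
placeholders, and this is just a sketch assuming the 0.90-style HTablePool
API rather than your exact code):

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTableInterface;
  import org.apache.hadoop.hbase.client.HTablePool;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  public class LookupMapper extends Mapper<LongWritable, Text, Text, Text> {

    private HTablePool pool;
    private HTableInterface refTable;

    @Override
    protected void setup(Context context) throws IOException {
      // Merge the job conf with hbase-default.xml/hbase-site.xml; the
      // hbase.zookeeper.quorum it ends up with must resolve from this node.
      Configuration conf = HBaseConfiguration.create(context.getConfiguration());
      pool = new HTablePool(conf, 10);
      try {
        refTable = pool.getTable("reference_table");  // placeholder table name
      } catch (Exception e) {
        // Rethrow so a connection failure shows up in the task logs
        // instead of the task just timing out.
        throw new IOException("Could not get reference table", e);
      }
    }
  }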

> It just so happens that the two machines on which I can successfully get the table
> are the ones listed in the hbase-site.xml file under the hbase.zookeeper.quorum property.
>

Any other configuration differences on these machines?  Are these
using localhost to find ZK and finding something because a ZK instance
is actually running locally (whereas the others fail to find a ZK
member)?
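
For comparison, hbase.zookeeper.quorum defaults to localhost; a client-side
hbase-site.xml usually lists the ensemble explicitly, something like this
(hostnames are placeholders):

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>

If the eight failing nodes only have the default, they'd each be looking for
a ZK peer on themselves and not finding one.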

> I was told that the other machines don't need to be in this file - ZooKeeper will handle
> everything and I should be able to get a table just fine.  The cluster is configured so HBase
> is not managing ZooKeeper.
>
This should be fine.

> If I ssh into any of the 8 machines that don't work, I am able to use the HBase shell
> and scan through a table.
>

If you look at your UI, are all ten machines showing, all with regions
loaded?  If you run a count from the shell, does it run through all of
your table contents?  (Could take a while if the table is big.)
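
For example (table name is a placeholder; the trailing comments are just notes):

  hbase(main):001:0> status 'simple'          # every regionserver should be listed
  hbase(main):002:0> count 'reference_table'  # walks the whole table row by row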

St.Ack

> Any help would be very much appreciated.
>
> Thanks!
> Adam
>
