hbase-user mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: Yet another Hbase failure
Date Fri, 20 Mar 2009 19:47:52 GMT


But can you really claim this is "yet another" HBase failure? Or are
these DFS problems caused by running with too small a cluster?
It's been some time, so my recollection is hazy, but didn't you
mention that you have a cluster of only 4 nodes?

I found that most of my DFS issues were caused by attempting
to host too much load on too few physical resources, and that
adding nodes to distribute the load solved my problems. 

Running with dfs.datanode.max.xcievers=2048 helped for a while,
but needing that was itself an indication that per-DataNode load
was too high.
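For reference, that setting lives in hadoop-site.xml (the config file
mentioned later in this thread). A sketch of the property block, with
2048 as the example value from above:

```xml
<!-- hadoop-site.xml: raise the DataNode's transceiver thread cap.
     Note the property name really is spelled "xcievers" in Hadoop. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2048</value>
</property>
```

A change here only takes effect after the DataNodes are restarted.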

Best regards,

   - Andy

> From: Michael Dagaev <michael.dagaev@gmail.com>
> Subject: Re: Yet another Hbase failure
> To: hbase-user@hadoop.apache.org
> Date: Friday, March 20, 2009, 7:04 AM
> Hi, stack
> See the hadoop-site.xml in the attachment.
> dfs.datanode.socket.write.timeout = 0,
> dfs.datanode.max.xcievers=1023
> The hbase-site.xml is not interesting.
> It contains only "hbase.rootdir" and
> "hbase.master"
> I checked the logs (I did not know hbase logged ulimit).
> On all region server hosts the ulimit is 32768. On the
> master host the ulimit is 1024.
> Thank you for your cooperation,
> M.
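The 1024 figure on the master is the common Linux default for open
file descriptors. A minimal sketch for checking and raising it,
assuming a Linux host and that the daemons run as a user named
"hadoop" (that username is an assumption, not from the thread):

```shell
# Show the current soft limit on open file descriptors for this shell.
ulimit -n

# To persist a higher limit, add lines like these to
# /etc/security/limits.conf (requires root; "hadoop" is assumed
# to be the user running the HBase/Hadoop daemons):
#   hadoop  soft  nofile  32768
#   hadoop  hard  nofile  32768
# Then log the user out and back in (or restart the daemons) so the
# new limit is picked up.
```

Checking `ulimit -n` in the same environment that launches the daemons
matters, because limits are per-session, not global.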

