hbase-user mailing list archives

From <Hariharan_Sethura...@Dell.com>
Subject RE: How to make the client fast fail
Date Wed, 24 Jun 2015 05:51:40 GMT
In our case (0.94.15), we had a timer to interrupt the hanging thread. After that, we were
able to reconnect to HBase and everything worked fine. But we observed the old ZooKeeper
client thread(s) still failing to connect, alongside the new set of ZooKeeper client thread(s)
that were serving responses.
So we ruled out the timer option.
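
For illustration, a minimal sketch of the timer-interrupt approach described above, written
against the 0.94-style client API. The table name, row key, and the 10-second budget are
assumptions for the example, not the exact code from our setup.

import java.util.Timer;
import java.util.TimerTask;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class TimerInterruptSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        final Thread caller = Thread.currentThread();

        // Watchdog: interrupt the calling thread if the client call hangs too long.
        Timer watchdog = new Timer(true);
        watchdog.schedule(new TimerTask() {
            public void run() {
                caller.interrupt();
            }
        }, 10000L);

        HTable table = null;
        try {
            table = new HTable(conf, "my_table");   // 0.94-style client API
            Result r = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println("value: " + r);
        } catch (Exception e) {
            // If the watchdog fired, the blocked call usually surfaces an
            // interrupted I/O exception here; the old ZooKeeper client threads,
            // however, may keep retrying in the background, which is the problem
            // described above.
            System.err.println("request failed or was interrupted: " + e);
        } finally {
            watchdog.cancel();
            if (table != null) table.close();
        }
    }
}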

Thanks,
Hari

-----Original Message-----
From: Michael Segel [mailto:michael_segel@hotmail.com]
Sent: Thursday, June 11, 2015 5:17 AM
To: user@hbase.apache.org
Subject: Re: How to make the client fast fail

Threads?

Regardless of your Hadoop settings, if you want something faster, you can run a timer in one
thread and the request in another. If you hit your timeout before you get a response, you can
stop the request thread.
(YMMV depending on side effects... )
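
For illustration, a minimal sketch of that two-thread idea: the request runs in a worker
thread and the caller bounds its wait with Future.get(timeout). The table name, row key, and
10-second budget are illustrative assumptions.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class BoundedGetSketch {
    public static void main(String[] args) throws Exception {
        final Configuration conf = HBaseConfiguration.create();
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Future<Result> future = pool.submit(new Callable<Result>() {
            public Result call() throws Exception {
                HTable table = new HTable(conf, "my_table");
                try {
                    return table.get(new Get(Bytes.toBytes("row1")));
                } finally {
                    table.close();
                }
            }
        });

        try {
            // Give up after 10 seconds even if the client is still retrying internally.
            Result r = future.get(10, TimeUnit.SECONDS);
            System.out.println("value: " + r);
        } catch (TimeoutException te) {
            future.cancel(true);   // interrupts the worker; the caller moves on
            System.err.println("HBase call timed out");
        } finally {
            pool.shutdownNow();
        }
    }
}

Note the side effect mentioned above: cancelling the Future only releases the calling code;
the underlying HBase/ZooKeeper client threads may keep retrying in the background until their
own timeouts expire.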

> On Jun 10, 2015, at 12:55 AM, PRANEESH KUMAR wrote:
>
> Hi,
>
> I have created the Connection object with the default configuration. If
> ZooKeeper, the HMaster, or a RegionServer is down, the client doesn't fail
> fast; it takes almost 20 minutes to throw an error.
>
> What is the best configuration to make the client fail fast?
>
> Also, what is the significance of changing the following parameters?
>
> hbase.client.retries.number
> zookeeper.recovery.retry
> zookeeper.session.timeout
> zookeeper.recovery.retry.intervalmill
> hbase.rpc.timeout
>
> Regards,
> Praneesh
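
For the parameters quoted above, a hedged configuration sketch follows; the values are
illustrative starting points, not recommendations, and the right numbers depend on the
cluster. Roughly speaking, the client's worst-case blocking time is on the order of the retry
count times (RPC timeout plus retry backoff), which is how the defaults can add up to the
~20 minutes observed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FastFailConfSketch {
    public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // How many times the client retries a failed operation before giving up.
        conf.setInt("hbase.client.retries.number", 3);
        // Retries of ZooKeeper operations inside the client's recoverable ZK wrapper.
        conf.setInt("zookeeper.recovery.retry", 1);
        // Sleep between those ZooKeeper retries, in milliseconds.
        conf.setInt("zookeeper.recovery.retry.intervalmill", 200);
        // ZooKeeper session timeout requested by the client, in milliseconds.
        conf.setInt("zookeeper.session.timeout", 30000);
        // Upper bound on a single client RPC, in milliseconds.
        conf.setInt("hbase.rpc.timeout", 10000);
        return conf;
    }
}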
