hbase-user mailing list archives

From Doug Meil <doug.m...@explorysmedical.com>
Subject Re: HBase client slows down
Date Tue, 09 Oct 2012 17:50:19 GMT

So you're running on a single regionserver?

On 10/9/12 1:44 PM, "Mohit Anchlia" <mohitanchlia@gmail.com> wrote:

>I am using HTableInterface via a pool, but I don't see any setAutoFlush
>method. I am using the 0.92.1 jar.
>Also, how can I see if the RS is getting overloaded? I looked at the UI and I
>don't see anything obvious:
>requestsPerSecond=0, numberOfOnlineRegions=1, numberOfStores=1,
>numberOfStorefiles=1, storefileIndexSizeMB=0, rootIndexSizeKB=1,
>totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, memstoreSizeMB=27,
>readRequestsCount=126, writeRequestsCount=96157, compactionQueueSize=0,
>flushQueueSize=0, usedHeapMB=44, maxHeapMB=3976, blockCacheSizeMB=8.79,
>blockCacheFreeMB=985.34, blockCacheCount=11, blockCacheHitCount=23,
>blockCacheMissCount=28, blockCacheEvictedCount=0, blockCacheHitRatio=45%,
>blockCacheHitCachingRatio=67%, hdfsBlocksLocalityIndex=100
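In 0.92, setAutoFlush(boolean) is defined on HTable rather than on HTableInterface, which is why it doesn't appear on a pooled table. A minimal sketch of disabling auto-flush by constructing HTable directly — the table name, column family, and buffer size below are illustrative placeholders, not from this thread:

```java
// Sketch against the 0.92 client API; "mytable", "cf", and the 2 MB
// write-buffer size are hypothetical placeholders.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

public class BufferedWrites {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");
        table.setAutoFlush(false);                  // buffer Puts client-side
        table.setWriteBufferSize(2L * 1024 * 1024); // flush roughly every 2 MB
        try {
            Put p = new Put("row-1".getBytes());
            p.add("cf".getBytes(), "q".getBytes(), "v".getBytes());
            table.put(p);                           // lands in the write buffer
        } finally {
            table.flushCommits();                   // push any buffered edits
            table.close();
        }
    }
}
```

Note this trades durability for throughput: buffered Puts sit in the client until the buffer fills or flushCommits() runs.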
>On Tue, Oct 9, 2012 at 10:32 AM, Doug Meil wrote:
>> It's one of those "it depends" answers.
>> See this first…
>> http://hbase.apache.org/book.html#perf.writing
>> … Additionally, one thing to understand is where you are writing data.
>> Either keep track of the requests per RS over the period (e.g., via the
>> web interface), or you can also track it on the client side with...
>> getRegionLocation(byte[], boolean)
>> … to know if you are continually hitting the same RS or spreading the
>> load around.
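If that check shows every write landing on the same RS — likely with monotonically increasing timeseries row keys — one common workaround (not something prescribed in this thread) is salting the row key. A pure-Java sketch with hypothetical names and a bucket count chosen by assumption:

```java
// Sketch (hypothetical helper, not from the thread): prefix each row key
// with a one-byte salt so sequential timeseries keys spread across
// regions instead of all hitting one region server.
import java.util.Arrays;

public class SaltedKey {
    // Assumption: roughly one bucket per region you expect the table to have.
    public static final int BUCKETS = 8;

    // Derive a stable salt from the key's hash and prepend it, so the
    // same logical key always maps to the same bucket.
    public static byte[] salt(byte[] rowKey) {
        byte bucket = (byte) ((Arrays.hashCode(rowKey) & 0x7fffffff) % BUCKETS);
        byte[] salted = new byte[rowKey.length + 1];
        salted[0] = bucket;
        System.arraycopy(rowKey, 0, salted, 1, rowKey.length);
        return salted;
    }
}
```

The cost is on the read side: a scan over a time range then has to fan out across all BUCKETS prefixes.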
>> On 10/9/12 1:27 PM, "Mohit Anchlia" <mohitanchlia@gmail.com> wrote:
>> >I just have 5 stress client threads writing timeseries data. What I
>> >see is that after a few minutes the HBase client slows down and starts
>> >to take 4 seconds. Once I kill the client and restart it, it stays at a
>> >sustainable rate for about 2 minutes and then slows down again. I am
>> >wondering if there is something I should be doing on the HBase client
>> >side? All the requests are similar in terms of data.
