hbase-user mailing list archives

From Lukas Nalezenec <lukas.naleze...@firma.seznam.cz>
Subject Re: How to get HBase table size using API
Date Tue, 18 Feb 2014 10:26:13 GMT
Hi,
Add this import:
import org.apache.hadoop.hbase.HServerLoad;

And change the class names:
ServerLoad -> HServerLoad
RegionLoad -> HServerLoad.RegionLoad
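
For illustration, this is roughly how those 0.94 classes are used when reading
region store file sizes from the cluster status (a minimal sketch, assuming an
existing Configuration conf; it sums across all tables, so filter by region
name if you only want a single table):

import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HServerLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

HBaseAdmin admin = new HBaseAdmin(conf);
ClusterStatus status = admin.getClusterStatus();
long totalMB = 0;
for (ServerName server : status.getServers()) {
    HServerLoad load = status.getLoad(server);
    // getRegionsLoad() maps region name -> per-region load metrics
    for (HServerLoad.RegionLoad rl : load.getRegionsLoad().values()) {
        totalMB += rl.getStorefileSizeMB();  // store files only, no memstore
    }
}
admin.close();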

Lukas

On 18.2.2014 11:19, Vikram Singh Chandel wrote:
> Hi Lukas
> As you said, RegionSizeCalculator is developed on top of 0.94; the
> class has dependencies, viz.
> import org.apache.hadoop.hbase.RegionLoad;
> import org.apache.hadoop.hbase.ServerLoad;
>
> I am unable to find these classes in 0.94.x.
>
> Are these classes available in 0.94 under some other package?
>
>
>
> On Tue, Feb 11, 2014 at 3:12 PM, Lukas Nalezenec <
> lukas.nalezenec@firma.seznam.cz> wrote:
>
>> Hi,
>>
>> I am an HBase newbie; maybe there is a simpler solution, but this will work. I
>> tried estimating the size using HDFS, but it is not the best solution (see link [1]).
>>
>> You don't need to work with TableSplits; look at the class
>> org.apache.hadoop.hbase.util.RegionSizeCalculator.
>> It can do what you need. Create an instance of this class, then call the method
>> getRegionSizeMap() and sum all the values in the map. Note that the size
>> includes only store file sizes, not memstore sizes.
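>>
>> For illustration, a rough sketch (assuming the 0.98-style constructor that
>> takes an HTable, and a table name of your own):
>>
>> import org.apache.hadoop.hbase.client.HTable;
>> import org.apache.hadoop.hbase.util.RegionSizeCalculator;
>>
>> HTable table = new HTable(conf, "my_table");  // hypothetical table name
>> RegionSizeCalculator calc = new RegionSizeCalculator(table);
>> long totalBytes = 0;
>> // getRegionSizeMap() maps region name -> store file size in bytes
>> for (long regionBytes : calc.getRegionSizeMap().values()) {
>>     totalBytes += regionBytes;
>> }
>> table.close();
>>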
>> If you need to customize the behaviour of this class, just copy the code and
>> change it.
>>
>> This class will be in version 0.98, but it was developed on 0.94 - it will
>> work there too, but you will have to change some Java imports.
>>
>>
>> [1]
>> https://issues.apache.org/jira/browse/HBASE-10413?focusedCommentId=13889745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13889745
>>
>> Lukas
>>
>>
>>
>> On 11.2.2014 08:14, Vikram Singh Chandel wrote:
>>
>>> Hi Lukas
>>>
>>> The TableSplit constructor expects startRow, endRow, and location; we won't
>>> have information about any of these.
>>> Moreover, we require the size of the table as a whole, not the split size.
>>>
>>> We will use the table size to check for a threshold breach in the metadata
>>> table; if the threshold is breached, we have to trigger a delete operation on
>>> the table whose threshold was breached, deleting LRU records until the table
>>> size is within the limit (~50-60 GB).
>>>
>>>
>>> On Mon, Feb 10, 2014 at 6:01 PM, Vikram Singh Chandel <
>>> vikramsinghchandel@gmail.com> wrote:
>>>
>>>> Hi,
>>>> The requirement is to get the HBase table size (using the API) and to save
>>>> this size for each table in a metadata table.
>>>>
>>>> I tried the HDFS command to check the table size, but I need an API method
>>>> (if one is available):
>>>>
>>>> hadoop fs -du -h hdfs://
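>>>>
>>>> As a sketch, the programmatic equivalent via the HDFS API could be
>>>> something like this (assuming the default 0.94 layout, where a table's
>>>> data lives under <hbase.rootdir>/<tableName>):
>>>>
>>>> import org.apache.hadoop.conf.Configuration;
>>>> import org.apache.hadoop.fs.FileSystem;
>>>> import org.apache.hadoop.fs.Path;
>>>>
>>>> Configuration conf = new Configuration();
>>>> FileSystem fs = FileSystem.get(conf);
>>>> // sums all bytes under the table directory, e.g. /hbase/my_table
>>>> long bytes = fs.getContentSummary(new Path("/hbase/my_table")).getLength();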
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>> --
>>>> *Regards*
>>>>
>>>> *VIKRAM SINGH CHANDEL*
>>>>
>>>>
>>>> Please do not print this email unless it is absolutely necessary. Reduce.
>>>> Reuse. Recycle. Save our planet.
>>>>
>>>>
>>>
>

