hbase-user mailing list archives

From Lukas Nalezenec <lukas.naleze...@firma.seznam.cz>
Subject Re: How to get HBase table size using API
Date Tue, 11 Feb 2014 09:42:55 GMT


I am an HBase newbie, so maybe there is a simpler solution, but this will work. I
tried estimating the size via HDFS, but that is not the best solution (see link [1]).

You don't need to work with TableSplits; look at the RegionSizeCalculator class.
It can do what you need: create an instance of the class, then call
getRegionSizeMap() and sum all the values in the map. Note that the size
covers only store-file sizes, not memstore sizes.
If you need to customize the behaviour of this class, just copy the code and
change it.

This class will ship in version 0.98, but it was developed on 0.94 - it
will work there too, you will just have to change some Java imports.
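The steps above can be sketched as follows. This is a minimal sketch, not a tested implementation: the table name "my_table" is made up, and the RegionSizeCalculator call is shown in a comment so the snippet runs without a live cluster (the real map keys are region names as byte[]; the values are store-file sizes in bytes).

```java
import java.util.HashMap;
import java.util.Map;

public class TableSizeSketch {
    // Sum all values of the map returned by RegionSizeCalculator.getRegionSizeMap().
    // Only store-file sizes are counted, not memstore contents.
    static long sumRegionSizes(Map<?, Long> regionSizes) {
        long total = 0L;
        for (long size : regionSizes.values()) {
            total += size;
        }
        return total;
    }

    public static void main(String[] args) {
        // Against a real cluster (0.98, or 0.94 with the class copied in) this would be:
        //   HTable table = new HTable(HBaseConfiguration.create(), "my_table");
        //   RegionSizeCalculator calc = new RegionSizeCalculator(table);
        //   long totalBytes = sumRegionSizes(calc.getRegionSizeMap());
        // Stand-in map so the sketch runs without a cluster:
        Map<String, Long> sizes = new HashMap<String, Long>();
        sizes.put("region-1", 1024L);
        sizes.put("region-2", 2048L);
        System.out.println(sumRegionSizes(sizes)); // prints 3072
    }
}
```

The summed total is what you would store per table in the metadata table and compare against the threshold.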



On 11.2.2014 08:14, Vikram Singh Chandel wrote:
> Hi Lukas,
> the TableSplit constructor expects startRow, endRow and location, and we won't
> have info about any of these.
> Moreover, we require the table size as a whole, not the split size.
> We will use the table size to check for a threshold breach in the metadata
> table; if the threshold is breached, we have to trigger a delete operation on
> that table, deleting LRU records until the table size is within the limit
> (~50-60 GB).
> On Mon, Feb 10, 2014 at 6:01 PM, Vikram Singh Chandel <
> vikramsinghchandel@gmail.com> wrote:
>> Hi
>> The requirement is to get the HBase table size (using the API) and save
>> this size for each table in a metadata table.
>> I tried an HDFS command to check the table size, but we need an API method (if available):
>> hadoop fs -du -h hdfs://
>> Thanks
>> --
>> *Regards*
>> Please do not print this email unless it is absolutely necessary. Reduce.
>> Reuse. Recycle. Save our planet.
