hbase-user mailing list archives

From Anoop Sam John <anoo...@huawei.com>
Subject RE: Reg:delete performance on HBase table
Date Thu, 06 Dec 2012 04:35:13 GMT
Hi Manoj
        If I read you correctly, you want to aggregate some 3-4 days of data and then have that
data deleted.  Can you think of creating tables for this period (one table
per 4 days), aggregating, and then dropping the table?  Then another table for the next 4 days?
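The table-per-period idea could look roughly like the sketch below, written against the 0.90-era client API (HBaseAdmin / HTableDescriptor). The table name, column family, and window start date are invented for illustration, not from the thread:

```java
// Hypothetical sketch: one table per 4-day window, dropped after aggregation.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class PeriodTableRollover {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Create the table for the current 4-day window
        // (name encodes the window start; "e" is an assumed family name).
        String tableName = "events_2012_12_01";
        HTableDescriptor desc = new HTableDescriptor(tableName);
        desc.addFamily(new HColumnDescriptor("e"));
        admin.createTable(desc);

        // ... load 4 days of events, run the aggregation MapReduce job ...

        // Dropping the table discards all of its data at once -- far cheaper
        // than issuing millions of individual Deletes.
        admin.disableTable(tableName);
        admin.deleteTable(tableName);
    }
}
```

Dropping a table is a metadata operation, so it avoids the per-row delete cost entirely; the trade-off is that readers must know which table holds which window.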

Another option is the TTL feature that HBase provides.
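With TTL, HBase expires old cells automatically during compactions, so no explicit deletes are needed. A minimal sketch of enabling it on a column family (the family name "e" and the 4-day window are assumptions for illustration):

```java
// Hedged sketch: set a time-to-live on a column family so cells older
// than the TTL are removed automatically at compaction time.
import org.apache.hadoop.hbase.HColumnDescriptor;

public class TtlExample {
    // TTL is specified in seconds; 4 days = 4 * 24 * 60 * 60 = 345600.
    static final int FOUR_DAYS_SECONDS = 4 * 24 * 60 * 60;

    public static void main(String[] args) {
        HColumnDescriptor family = new HColumnDescriptor("e");
        family.setTimeToLive(FOUR_DAYS_SECONDS);
        // Attach this descriptor to an HTableDescriptor when creating
        // (or altering) the table.
    }
}
```

Note that TTL expiry is based on cell timestamps and happens lazily, so data is not guaranteed to disappear exactly at the 4-day mark.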

From: Manoj Babu [manoj444@gmail.com]
Sent: Thursday, December 06, 2012 8:44 AM
To: user
Subject: Re: Reg:delete performance on HBase table


Thank you very much for the valuable information.

The HBase version I am using is 0.90.3-cdh3u1.

Use case is:
We collect information on where users spend time on our site (tracking
user events). We are also migrating historical data from an existing
system, and from this data we need to populate metrics for the year,
e.g. Customer A hits option x n times and option y n times, Customer B
hits option x1 n times and option y1 n times.

Earlier, using Hadoop MapReduce, we aggregated the whole year's data
every 2 to 4 days and emitted it to an Oracle table via DBOutputFormat.
Inserting 181 million rows took only 20 minutes with 20 reducers writing
in parallel. But before repopulating the year table we had to delete the
existing 181 million rows for that year, and that took more than 3 hours
without finishing; we ended up killing the session and doing a truncate
instead. We are still in the development stage, so we are planning to
try HBase for this case, since deleting millions of rows in Oracle takes
too much time.

We need to delete rows for a particular year only, so we cannot simply
drop the table. In Oracle, truncate is extremely fast, but it removes
all rows, not just one year's.


On Wed, Dec 5, 2012 at 11:44 PM, Nick Dimiduk <ndimiduk@gmail.com> wrote:

> On Wed, Dec 5, 2012 at 7:46 AM, Doug Meil <doug.meil@explorysmedical.com
> >wrote:
> > You probably want to read this section on the RefGuide about deleting
> from
> > HBase.
> >
> > http://hbase.apache.org/book.html#perf.deleting
> So hold on. From the guide:
> 11.9.2. Delete RPC Behavior
> >
> > Be aware that htable.delete(Delete) doesn't use the writeBuffer. It will
> > execute a RegionServer RPC with each invocation. For a large number of
> > deletes, consider htable.delete(List).
> >
> > See
> >
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#delete%28org.apache.hadoop.hbase.client.Delete%29
> So Deletes are like Puts except they're not executed the same way. Indeed,
> HTable.put() is implemented using the write buffer while HTable.delete()
> makes a MutateRequest directly. What is the reason for this? Why is the
> semantic of Delete subtly different from Put?
> For that matter, why not buffer all mutation operations?
> HTable.checkAndPut(), checkAndDelete() both make direct MutateRequest calls
> as well.
> Thanks,
> -n
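As a footnote to the batching discussion above, the List form the guide recommends would look roughly like this against the 0.90-era client. The table name "metrics", the year-prefixed row-key scheme, and the batch size of 1000 are all assumptions for illustration:

```java
// Hedged sketch: delete one year's rows via HTable.delete(List<Delete>),
// sending deletes in batches of 1000 instead of one RPC per Delete.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedYearDelete {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "metrics");

        // Scan only the target year, assuming row keys are prefixed with
        // the year (e.g. "2011_customerA_optionX"); the stop row is the
        // exclusive upper bound of that prefix range.
        Scan scan = new Scan(Bytes.toBytes("2011_"), Bytes.toBytes("2012_"));
        ResultScanner scanner = table.getScanner(scan);

        List<Delete> batch = new ArrayList<Delete>();
        for (Result r : scanner) {
            batch.add(new Delete(r.getRow()));
            if (batch.size() >= 1000) {
                table.delete(batch);  // one batched RPC, not 1000 round trips
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            table.delete(batch);      // flush the final partial batch
        }
        scanner.close();
        table.close();
    }
}
```

Even batched, this still writes a delete marker per row, so for very large ranges the table-per-period or TTL approaches discussed earlier in the thread remain much cheaper.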