hbase-user mailing list archives

From Vladimir Rodionov <vrodio...@carrieriq.com>
Subject RE: Scanner Caching with wildly varying row widths
Date Tue, 05 Nov 2013 00:38:50 GMT
setBatch and setCaching are independent of each other. setCaching controls the number
of rows transferred from server to client in one RPC call; setBatch controls how many
cells (KeyValues) are read from the underlying storage per call in the HBase
InternalScanner implementation.

To avoid an OOME during a Scan, use setBatch with an appropriate limit.
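[Editor's illustration, not part of the original mail: a minimal sketch of the two
settings on the 0.94-era client API. The table name and both limits are made up.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class BoundedScan {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "mytable");  // illustrative table name
            Scan scan = new Scan();
            scan.setCaching(250);  // rows buffered per RPC round trip
            scan.setBatch(500);    // at most 500 cells per Result, bounding client memory
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result r : scanner) {
                    // each Result holds <= 500 cells; a row wider than that
                    // arrives as several consecutive partial Results
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }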

Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodionov@carrieriq.com

________________________________________
From: Dhaval Shah [prince_mithibai@yahoo.co.in]
Sent: Monday, November 04, 2013 3:10 PM
To: user@hbase.apache.org
Subject: Re: Scanner Caching with wildly varying row widths

You can use scan.setBatch() to limit the number of columns (cells) returned per Result.
Note that it will split a wide row into multiple Results from the client's perspective,
so client code may need to be modified to make use of the setBatch feature (a sketch of
such a change follows below).
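[Editor's illustration, not from the original mail: one way client code could re-group
batched Results, since consecutive Results sharing a row key belong to one logical row.
The class and method names are hypothetical.]

    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowReassembly {
        // With setBatch() in effect, a wide row comes back as several
        // consecutive Results with the same row key; group them to get
        // per-row totals (here: a cell count per row).
        static void countCellsPerRow(ResultScanner scanner) {
            byte[] currentRow = null;
            long cells = 0;
            for (Result r : scanner) {
                if (currentRow == null || !Bytes.equals(currentRow, r.getRow())) {
                    if (currentRow != null) {
                        System.out.println(Bytes.toStringBinary(currentRow) + " = " + cells);
                    }
                    currentRow = r.getRow();
                    cells = 0;
                }
                cells += r.size();  // cells in this partial Result
            }
            if (currentRow != null) {
                System.out.println(Bytes.toStringBinary(currentRow) + " = " + cells);
            }
        }
    }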

Regards,
Dhaval


________________________________
 From: Patrick Schless <patrick.schless@gmail.com>
To: user <user@hbase.apache.org>
Sent: Monday, 4 November 2013 6:03 PM
Subject: Scanner Caching with wildly varying row widths


We have an application where a row can contain anywhere from 1 to 3,600,000 cells
(there's only one column family). In practice, most rows have under 100 cells.

Now we want to run some MapReduce jobs that touch every cell within a range
(e.g. count how many cells we have). With scanner caching set to something
like 250, the job will chug along for a long time until it hits a row with
a lot of data, and then it dies. Setting the cache size down to 1 (row)
would presumably work, but would take forever to run.
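[Editor's sketch, not part of the original question: one hypothetical way such a
counting job could be wired up so that a wide row is streamed in slices rather than
materialized whole. The table name, limits, and class names are made up; setBatch is
the setting suggested in the replies above.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class CellCount {
        static class CellMapper extends TableMapper<NullWritable, NullWritable> {
            @Override
            protected void map(ImmutableBytesWritable row, Result value, Context context) {
                // with setBatch, one logical row may reach map() as several
                // partial Results; harmless when only counting cells
                context.getCounter("stats", "cells").increment(value.size());
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Scan scan = new Scan();
            scan.setCaching(250);        // fine for the many narrow rows
            scan.setBatch(1000);         // caps cells per Result so one wide row can't OOME a mapper
            scan.setCacheBlocks(false);  // usual advice for full scans from MapReduce
            Job job = new Job(conf, "cell-count");
            job.setJarByClass(CellCount.class);
            TableMapReduceUtil.initTableMapperJob(
                "mytable", scan, CellMapper.class,  // "mytable" is illustrative
                NullWritable.class, NullWritable.class, job);
            job.setNumReduceTasks(0);
            job.setOutputFormatClass(NullOutputFormat.class);
            job.waitForCompletion(true);
        }
    }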

We have addressed this by writing some jobs that use coprocessors, which
allow us to pull back sets of cells instead of sets of rows, but this means
we can't use any of the built-in jobs that come with HBase (e.g. CopyTable).
Is there any way around this? Have other people had to deal with such high
variability in their row sizes?

