Of course not in HBase itself, but the server is Phoenix, so that is where the VLH could be implemented, like this (a rough sketch in code follows the list below):
- run the query on top of HBase (as normal)
- "cache" first N result rows (eg. 1M maxim)  = serialize the results on disk (eg. using temp folder, in some pre-processed "pages" structure)
- return page 1
- any subsequent request from the client should carry a flag indicating that it is not the first page request
- <wait for the client to ask for page X of the results> and return the rows of page X
- after a configurable number of N minutes, discard the results from disk (or have a cleanup task run every N minutes that deletes older cached results)
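For illustration only, here is a minimal sketch of the bookkeeping such a server-side page cache would need. It is an in-memory simplification of the on-disk "pages" idea above, and all class and method names are hypothetical, not part of Phoenix:

import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PagedResultCache {

    static class CachedQuery {
        // pre-split result pages; in the proposal these would be serialized to disk
        final List<List<Object>> pages;
        volatile long lastAccessMillis = System.currentTimeMillis();
        CachedQuery(List<List<Object>> pages) { this.pages = pages; }
    }

    private final Map<UUID, CachedQuery> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public PagedResultCache(long ttlMinutes) {
        this.ttlMillis = TimeUnit.MINUTES.toMillis(ttlMinutes);
        // periodic cleanup task: drop cached results idle longer than the configured TTL
        ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
        cleaner.scheduleAtFixedRate(this::evictExpired, ttlMinutes, ttlMinutes, TimeUnit.MINUTES);
    }

    /** First page request: cache all pages and hand back a handle for later requests. */
    public UUID put(List<List<Object>> pages) {
        UUID handle = UUID.randomUUID();
        cache.put(handle, new CachedQuery(pages));
        return handle;
    }

    /** Subsequent requests pass the handle plus the page number they want (1-based). */
    public List<Object> getPage(UUID handle, int pageNumber) {
        CachedQuery q = cache.get(handle);
        if (q == null) return null;   // expired or never cached: caller must re-run the query
        q.lastAccessMillis = System.currentTimeMillis();
        return q.pages.get(pageNumber - 1);
    }

    private void evictExpired() {
        long now = System.currentTimeMillis();
        cache.entrySet().removeIf(e -> now - e.getValue().lastAccessMillis > ttlMillis);
    }
}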




2017-05-18 22:02 GMT+02:00 James Taylor <jamestaylor@apache.org>:
HBase does not lend itself to that pattern. Rows overlap in HFiles (by design). There's no facility to jump to the Nth row. Best to use the RVC mechanism.

On Thu, May 18, 2017 at 12:03 PM Ciureanu Constantin <ciureanu.constantin@gmail.com> wrote:
What about using the VLH pattern?
And keep the offsets for each page on the server side for a while... (the client might not need all of them, and might never ask for the next page)

On May 18, 2017 20:02, "James Taylor" <jamestaylor@apache.org> wrote:
Yes, it's expected that query performance would degrade as the offset increases. The best Phoenix can do for OFFSET is to scan the rows and count them until the offset is reached. Use the row value constructor technique instead to prevent this: https://phoenix.apache.org/paged.html
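For reference, here is a minimal JDBC sketch of that row value constructor technique, assuming a table with a composite key; the table name, columns (MY_TABLE, K1, K2, VAL) and connection URL are placeholders, not from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RvcPaging {

    public static void main(String[] args) throws Exception {
        // Assumes the Phoenix client jar is on the classpath and an HBase
        // quorum is reachable at localhost; adjust the URL for a real cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            // The (K1, K2) > (?, ?) row value constructor lets Phoenix seek directly
            // to the start of the next page instead of scanning and counting rows
            // the way OFFSET does.
            String firstPage = "SELECT K1, K2, VAL FROM MY_TABLE ORDER BY K1, K2 LIMIT 20";
            String nextPage  = "SELECT K1, K2, VAL FROM MY_TABLE "
                             + "WHERE (K1, K2) > (?, ?) ORDER BY K1, K2 LIMIT 20";

            String lastK1 = null;   // key values of the last row of the previous page
            Long lastK2 = null;
            while (true) {
                try (PreparedStatement ps =
                         conn.prepareStatement(lastK1 == null ? firstPage : nextPage)) {
                    if (lastK1 != null) {
                        ps.setString(1, lastK1);
                        ps.setLong(2, lastK2);
                    }
                    int rowsInPage = 0;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            rowsInPage++;
                            lastK1 = rs.getString("K1");   // remember where this page ended
                            lastK2 = rs.getLong("K2");
                            System.out.println(lastK1 + " " + lastK2 + " " + rs.getString("VAL"));
                        }
                    }
                    if (rowsInPage < 20) {
                        break;   // a short (or empty) page means we have reached the end
                    }
                }
            }
        }
    }
}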

On Thu, May 18, 2017 at 6:18 AM, Sumanta Gh <sumanta.gh@tcs.com> wrote:
Thanks Rafa.
This is working perfectly fine with 4.10.
Are the offset and limit enforced on the client side? I find that query performance gradually degrades as I increase the offset value.


Regards
Sumanta 


-----rafa <rafa13@gmail.com> wrote: -----
To: user@phoenix.apache.org
From: rafa <rafa13@gmail.com>
Date: 05/18/2017 06:15PM
Subject: Re: pagination


Hi Sumanta,

It is supported from 4.8:

Apache Phoenix enables OLTP and operational analytics for Hadoop through
SQL support and integration with other projects in the ecosystem such as
Spark, HBase, Pig, Flume, MapReduce and Hive.

We're pleased to announce our 4.8.0 release which includes:
- Local Index improvements[1]
- Integration with hive[2]
- Namespace mapping support[3]
- VIEW enhancements[4]
- Offset support for paged queries[5]
- 130+ Bugs resolved[6]
- HBase v1.2 is also supported (with continued support for v1.1, v1.0 & v0.98)
- Many performance enhancements (related to StatsCache, distinct, Serial query with Stats, etc.)[6]

The release is available in source or binary form here [7].

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/ankit.asc

Thanks,
The Apache Phoenix Team

[1] https://issues.apache.org/jira/browse/PHOENIX-1734
[2] https://issues.apache.org/jira/browse/PHOENIX-2743
[3] https://issues.apache.org/jira/browse/PHOENIX-1311
[4] https://issues.apache.org/jira/browse/PHOENIX-1508
[5] https://issues.apache.org/jira/browse/PHOENIX-2722
[6] https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334393&projectId=12315120
[7] https://phoenix.apache.org/download.html

Regards,
rafa


On Thu, May 18, 2017 at 2:04 PM, rafa <rafa13@gmail.com> wrote:
Oops... sorry, my mistake. The Jira is for LIMIT-OFFSET with ORDER BY. Sorry.

On Thu, May 18, 2017 at 2:02 PM, rafa <rafa13@gmail.com> wrote:
Hi Sumanta,

I think it is not supported yet:

https://issues.apache.org/jira/browse/PHOENIX-3353

Best regards,
rafa

On Thu, May 18, 2017 at 1:52 PM, Sumanta Gh <sumanta.gh@tcs.com> wrote:
Hi,
From which version of Phoenix is LIMIT-OFFSET based pagination supported? I am using 4.7, but I am not able to use OFFSET.

Regards
Sumanta 
