calcite-dev mailing list archives

From Dan Di Spaltro <dan.dispal...@gmail.com>
Subject Re: Filter push
Date Thu, 02 Oct 2014 17:17:59 GMT
Thanks for the response, this is super helpful. Between when I sent
the message and when you responded I started looking at the Mongo
adapter, since I read the blog post about it; I figured it was the
newer way of doing things, and it seems much more straightforward.
Examples can teach me a lot, and I think the biggest disparity between
all the examples and what I am trying to do is that the complex
filtering examples translate a string query to a string query (is this
why this question is relevant [1]?). For instance, in RocksDB
everything besides the primary key is a table scan [2], and it works
like a cursor: you just iterate over the values. Ideally you could
apply the simple filtering during that iteration.
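
Concretely, something like this is what I mean (a minimal sketch
against the RocksJava iterator API; the path and the hard-coded prefix
test are just stand-ins for whatever ends up getting pushed down):

  import java.nio.charset.StandardCharsets;

  import org.rocksdb.RocksDB;
  import org.rocksdb.RocksDBException;
  import org.rocksdb.RocksIterator;

  public class FilteredScan {
    public static void main(String[] args) throws RocksDBException {
      RocksDB.loadLibrary();
      RocksDB db = RocksDB.open("/tmp/rocks-example");  // example path
      RocksIterator it = db.newIterator();
      // Full scan, applying a simple filter while iterating.  The prefix
      // test stands in for a real pushed-down condition.
      for (it.seekToFirst(); it.isValid(); it.next()) {
        String key = new String(it.key(), StandardCharsets.UTF_8);
        if (key.startsWith("foo\0")) {
          System.out.println(key + " = "
              + new String(it.value(), StandardCharsets.UTF_8));
        }
      }
      db.close();
    }
  }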

 I have some more questions inline.

[1] http://mail-archives.apache.org/mod_mbox/incubator-optiq-dev/201409.mbox/%3CCANQjSRNDKkRgqW839-0zpjhHW_hExWxEXA%2B8mCxO8-a2nRX1oA%40mail.gmail.com%3E
[2] https://github.com/facebook/rocksdb/wiki/Basic-Operations#iteration
[3] https://github.com/apache/incubator-optiq/blob/90f0bead8923dfb28992b60baee8d8cb92c18d9e/mongodb/src/main/java/net/hydromatic/optiq/impl/mongodb/MongoRules.java#L218

On Thu, Oct 2, 2014 at 12:31 AM, Julian Hyde <julian@hydromatic.net> wrote:
> Dan,
>
> First, can I clarify query semantics. SQL is a strongly-typed language, but there are
> a couple of ways you can make it work on a “schema-less” or “schema-on-read” database.
> The Splunk adapter does it one way (the columns you ask for effectively become the schema
> for the duration of that query) and Drill working on JSON documents does it another way (you
> get back a record with a single column whose value is a map, and then you can probe into that
> map for values). I guess the former is what people call a key-value database, the latter a
> document database.

This makes sense and is how I think about it too; I created a simple
mapping to do schema on read.
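
For what it's worth, the mapping is roughly this (a sketch; the
row\0column key layout is the one from my original mail below, and the
values stay as raw bytes):

  import java.nio.charset.StandardCharsets;
  import java.util.LinkedHashMap;
  import java.util.Map;

  import org.rocksdb.RocksIterator;

  public class SchemaOnRead {
    // Fold "<row>\0<column>" keys into one map per row, one layer deep.
    // Columns are discovered as they appear, so the effective schema is
    // whatever the data (or the query) asks for.
    static Map<String, Map<String, byte[]>> toRows(RocksIterator it) {
      Map<String, Map<String, byte[]>> rows =
          new LinkedHashMap<String, Map<String, byte[]>>();
      for (it.seekToFirst(); it.isValid(); it.next()) {
        String[] parts =
            new String(it.key(), StandardCharsets.UTF_8).split("\0", 2);
        Map<String, byte[]> row = rows.get(parts[0]);
        if (row == null) {
          row = new LinkedHashMap<String, byte[]>();
          rows.put(parts[0], row);
        }
        row.put(parts[1], it.value());  // e.g. rows.get("foo").get("bar") == v1
      }
      return rows;
    }
  }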

>
> The next question is what should be your basic table-scan operator. Let’s assume that
> you want to pass in a list of columns to project, plus a boolean expression like ‘c1 >
> 10 and c1 < 20 and c2 = 4’ for the conditions you want to be executed in the table scan.
> (Not sure exactly what expressions rocksdb can handle, but you should start simple.)

Like I mentioned above, this is where I am getting tripped up: since
it's such a basic datastore, I am having a hard time grokking how to
express that.

I was thinking of using Janino to compile the condition to a Java
expression and passing that to the iteration engine, but that is going
to take some time.
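
Something like this is what I had in mind (a sketch of Janino's
ExpressionEvaluator; the column names and types are hard-coded here,
but they would really come from the projected row type):

  import org.codehaus.janino.ExpressionEvaluator;

  public class CompiledFilter {
    public static void main(String[] args) throws Exception {
      // Compile the condition once, then evaluate it per row during the
      // iteration.  The expression is your example, rewritten as Java.
      ExpressionEvaluator ee = new ExpressionEvaluator();
      ee.setParameters(new String[] {"c1", "c2"},
          new Class[] {int.class, int.class});
      ee.setExpressionType(boolean.class);
      ee.cook("c1 > 10 && c1 < 20 && c2 == 4");

      boolean keep = (Boolean) ee.evaluate(new Object[] {12, 4});  // true
      System.out.println(keep);
    }
  }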

>
> I think I complicated things by trying to pack too much functionality into SplunkTableAccessRel.
> Here’s how I would do it better and simpler for your RocksDB adapter. (And by the way, the
> MongoDB adapter works more like this.)
>
> I’d write a RocksTableScan extends TableAccessRelBase. Also write a RocksProjectRel,
> whose expressions are only allowed to be RexInputRefs (i.e. single columns), and a RocksFilterRel,
> which is only allowed to do simple operations on PK columns. In other words, you write RocksDB
> equivalents of the relational operators scan, project, filter, that do no more than — often
> a lot less than — their logical counterparts. The mistake in the Splunk adapter was giving
> a “table scan” operator too many responsibilities.
>
> Create a RocksConvention, a RocksRel interface, and some rules:
>
>  RocksProjectRule: ProjectRel on a RocksRel ==> RocksProjectRel
>  RocksFilterRule: FilterRel on RocksRel ==> RocksFilterRel

As an example, that's what this is conveying, right [3]?
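
To check my understanding, here is roughly how I would expect the
filter rule to look. This is an untested sketch: the class names are
the ones you suggested, and the constructor/trait plumbing is
paraphrased from what I can see in the Mongo rules [3], so treat the
exact signatures as assumptions.

  import org.eigenbase.rel.FilterRel;
  import org.eigenbase.rel.FilterRelBase;
  import org.eigenbase.rel.RelNode;
  import org.eigenbase.rel.convert.ConverterRule;
  import org.eigenbase.relopt.Convention;
  import org.eigenbase.relopt.RelOptCluster;
  import org.eigenbase.relopt.RelTraitSet;
  import org.eigenbase.rex.RexNode;

  // The RocksRel interface and calling convention you describe (sketched).
  public interface RocksRel extends RelNode {
    Convention CONVENTION = new Convention.Impl("ROCKS", RocksRel.class);
  }

  // FilterRel on a RocksRel ==> RocksFilterRel, but only when the condition
  // is something RocksDB can evaluate; otherwise the rule declines and the
  // logical FilterRel stays where it is.
  class RocksFilterRule extends ConverterRule {
    public static final RocksFilterRule INSTANCE = new RocksFilterRule();

    private RocksFilterRule() {
      super(FilterRel.class, Convention.NONE, RocksRel.CONVENTION,
          "RocksFilterRule");
    }

    @Override public RelNode convert(RelNode rel) {
      FilterRel filter = (FilterRel) rel;
      if (!isSimplePkCondition(filter.getCondition())) {
        return null;  // leave it for the enumerable layer
      }
      RelTraitSet traits = filter.getTraitSet().replace(RocksRel.CONVENTION);
      return new RocksFilterRel(filter.getCluster(), traits,
          convert(filter.getChild(), traits), filter.getCondition());
    }

    // Hypothetical helper: should walk the RexNode tree and return true
    // only for simple comparisons on the PK column.  Conservative
    // placeholder until that is written.
    private boolean isSimplePkCondition(RexNode condition) {
      return false;  // TODO
    }
  }

  // Minimal physical filter, mirroring the shape of the Mongo one in [3].
  class RocksFilterRel extends FilterRelBase implements RocksRel {
    RocksFilterRel(RelOptCluster cluster, RelTraitSet traits, RelNode child,
        RexNode condition) {
      super(cluster, traits, child, condition);
    }

    @Override public RocksFilterRel copy(RelTraitSet traitSet, RelNode input,
        RexNode condition) {
      return new RocksFilterRel(getCluster(), traitSet, input, condition);
    }
  }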

>
> RocksProjectRule would only push down column references; it might need to create a ProjectRel
> above to handle expressions that cannot be pushed down. Similarly RocksFilterRule would only
> push down simple conditions.
>
> Fire those rules, together with the usual rules to push down filters and projects, and
> push filters through projects, and you will end up with a plan with
>
> RocksToEnumerableConverter
>   RocksProject
>     RocksFilter
>       RocksScan

Yeah, after looking at the code this is where I am at.

>
> at the bottom (RocksProject and RocksFilter may or may not be present). When you call
> the RocksToEnumerableConverter.implement method, it will gather together the project, filter
> and scan and make a single call to RocksDB, and generate code for an enumerable. The rest
> of the query will be left behind, above the RocksToEnumerableConverter, and also get implemented
> using code-generation.
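
To make that single call concrete for myself: I imagine the converter
boiling the RocksProject / RocksFilter / RocksScan stack down into one
plain carrier object for the runtime. The names below are mine, not
Optiq's, just to make the shape concrete:

  import java.util.List;

  // Hypothetical carrier assembled by RocksToEnumerableConverter.implement
  // from whatever RocksProject/RocksFilter/RocksScan nodes are present.
  public class RocksQuery {
    final List<String> projectedColumns;  // from RocksProject; empty = all
    final String condition;               // simple PK condition, or null

    RocksQuery(List<String> projectedColumns, String condition) {
      this.projectedColumns = projectedColumns;
      this.condition = condition;
    }
  }
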
>
> ArrayTable would be useful if you want to cache data sets in memory. As always with caching,
> I’d suggest you skip it in version 1.

I wasn't sure if I could subclass it and use the interesting bits,
since RocksDB deals with arrays of bytes, but serialization isn't what
I am confused about, so I'll skip this question.

>
> Sounds like an interesting project. You ask really smart questions, so I’d be very
> happy to help further. And when you have something, please push it to github so we can all
> see it.

Yeah, I will try to make something public. Thanks so much for the help.

>
> Julian
>
>
> On Oct 1, 2014, at 12:57 AM, Dan Di Spaltro <dan.dispaltro@gmail.com> wrote:
>
>> First off, this project is awesome.  Great in-code documentation.
>>
>> I am trying to build a SQL frontend for RocksDB.  The general idea is
>> to iterate over single key/value pairs and build them up into a map,
>> one layer deep.
>>
>> foo\0bar = v1
>> foo\0baz = v2
>> f2\0bar = v3
>> f2\0baz = v4
>>
>>
>>      bar   baz
>> foo   v1    v2
>> f2    v3    v4
>>
>> So I started looking at the Splunk code, since it seems like
>> middle-of-the-road complexity, with projection (unknown columns at
>> metadata time) and filter push-down (via the search query).  The Spark
>> example seemed overly complex, and the CSV example doesn't have
>> anything but projection (which is easy to grasp).  Here are some of my
>> specific trouble areas:
>>
>> #1 "primary key". with specific columns, I'd like pass them down to
>> the db engine to filter.  So I've set up the structure very similar to
>> the Splunk example, both with projections, filters and filter on
>> projections and vice versa.  Is there a good pattern to do this
>> basically to pass all the stuff I need to push down to the query
>> layer?  If it's not a pk how do I let the in-memory system do the
>> filtering?
>>
>> #2 "alchemy".  There is a lot of alchemy in [1], is it complex because
>> you're overloading a single class with multiple functions? Any good
>> ideas where I'd learn the top vs bottom projections. That's probably a
>> tough question, since I am pretty much a newb at query planning/sql
>> optimizers.
>>
>> #3 "array table". Would that be useful in this situation?
>>
>> [1] https://github.com/apache/incubator-optiq/blob/master/splunk/src/main/java/net/hydromatic/optiq/impl/splunk/SplunkPushDownRule.java#L99
>>
>> This is really neat stuff,
>>
>> -Dan
>>
>> --
>> Dan Di Spaltro
>



-- 
Dan Di Spaltro
