calcite-dev mailing list archives

From Vladimir Sitnikov <sitnikov.vladi...@gmail.com>
Subject Re: Filter push
Date Tue, 07 Oct 2014 17:21:45 GMT
Dan,

>As always, a good example helps

Did you succeed with a workable "select * from rocksdb_table"?
Can you share your code so the conversation can become more specific?

The calcite.debug output that you posted recently contains no RocksDB calls,
so it looks wrong.

>Do you think this would make more sense to follow in the footsteps of the
>spark model, since it's more about generating code that is run via spark
>RDD's vs translating queries from one language to another (in the case of
>Mongo/splunk)?

Mongo and Splunk have their own query languages, so those adapters do the
"translating queries from one language to another" work in order to push more
conditions/expressions down to the database engine.

As far as I understand, RocksDB speaks only Java (there is no such thing as a
RocksDB query language), so I would suggest the "translate to Java calls
(the RocksDB API)" approach.
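To illustrate what "translate to Java calls" means here, a minimal sketch: a full-table scan expressed as plain Java iteration over the store, yielding one row per key/value pair, which is roughly the shape a Calcite ScannableTable.scan() would take. The TreeMap is a hypothetical stand-in for a RocksDB instance (a real adapter would iterate with org.rocksdb.RocksIterator instead).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class KvScan {
    // Full-table scan: iterate the store and emit one Object[] row
    // per key/value entry, the form "select * from rocksdb_table" needs.
    // TreeMap stands in for a RocksDB handle in this sketch.
    public static List<Object[]> scanAll(TreeMap<String, String> kvStore) {
        List<Object[]> rows = new ArrayList<>();
        for (Map.Entry<String, String> e : kvStore.entrySet()) {
            rows.add(new Object[] {e.getKey(), e.getValue()});
        }
        return rows;
    }
}
```

The point is that no query string is ever built: the "translation" target is a sequence of ordinary Java API calls.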

You should have a concrete aim.
"Push down filters to rocksdb" is the wrong kind of aim. Well, it might be a
good aim if you are Julian and you know what you are doing, but that does not
seem to be the case here.
"Make Calcite use the rocks.get() API to fetch a row by key for this kind of
SQL" is a good one.
"Display all rows from rocksdb as a table" is also a good aim.
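The rocks.get() aim can be sketched in a few lines: once the planner has proven the predicate is "key = literal", the adapter issues a single point lookup instead of a scan. This is a hypothetical illustration with a plain Map standing in for RocksDB (a real adapter would call org.rocksdb.RocksDB.get() on byte arrays).

```java
import java.util.Map;

public class KvPointLookup {
    // For "SELECT * FROM t WHERE key = '<literal>'": issue one get()
    // instead of scanning every entry. Map.get() stands in for
    // rocks.get(key) in this sketch; returns null when the key is absent.
    public static Object[] lookup(Map<String, String> kvStore, String key) {
        String value = kvStore.get(key);
        return value == null ? null : new Object[] {key, value};
    }
}
```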

The easiest approach, from my point of view, is to use Calcite as an
intermediate framework that translates SQL into the _appropriate_ calls of
your storage engine (see Julian's approach earlier in this thread).
Calcite can glue the iterations together and fill in the missing parts. For
instance, you get "group by" implemented for free.
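"Group by for free" means the adapter only has to produce raw rows; the aggregation is layered on top of the scan by the engine (in Calcite, an EnumerableAggregate running over whatever enumerable the table supplies). A hedged stand-alone sketch of that layering, with a hypothetical count-per-first-letter aggregate over scan output:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ScanGroupBy {
    // The adapter emits raw {key, value} rows; this method plays the role
    // of the engine-provided aggregate, counting rows per first letter of
    // the key. The storage layer knows nothing about GROUP BY.
    public static Map<Character, Integer> countByFirstLetter(List<Object[]> rows) {
        Map<Character, Integer> counts = new TreeMap<>();
        for (Object[] row : rows) {
            char k = ((String) row[0]).charAt(0);
            counts.merge(k, 1, Integer::sum);
        }
        return counts;
    }
}
```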

Does that make sense?

--
Vladimir
