flink-user mailing list archives

From miki haiat <miko5...@gmail.com>
Subject Re: data enrichment with SQL use case
Date Mon, 16 Apr 2018 18:19:47 GMT
Hi, thanks for the reply. I will try to break your reply down into the flow
execution order.

The first data stream will use AsyncIO and select from the table;
the second stream will be Kafka, and then I can join the streams and map the result?

If that is the case, will I select the table only once, on load?
How can I make sure that my stream table is "fresh"?

I'm thinking to myself: is there a way to use the Flink state backend (RocksDB)
and create a read/write-through mechanism?
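The read/write-through idea above can be sketched in plain Java. This is only an illustration of the access pattern, not Flink code; in an actual Flink job the cache role would be played by keyed state (e.g. MapState backed by RocksDB), and the loader function below is a hypothetical stand-in for a SQL lookup.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal read-through cache: on a miss, load from the backing store
// (a stand-in for the MSSQL table) and remember the result.
class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader;

    ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // computeIfAbsent = read-through: consult the cache first,
        // fall back to the loader (the database) on a miss.
        return cache.computeIfAbsent(key, loader);
    }

    void put(K key, V value) {
        // A write-through variant would also push the value
        // to the backing store here.
        cache.put(key, value);
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        ReadThroughCache<String, String> cache =
            new ReadThroughCache<>(k -> "md-for-" + k); // hypothetical DB lookup
        System.out.println(cache.get("user-1")); // loaded from the "DB"
        System.out.println(cache.get("user-1")); // served from the cache
    }
}
```

Freshness is the catch with any such cache: entries loaded once never see later updates to the table, so you would need eviction, a TTL, or a change stream from the database to keep them current.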

Thanks

miki



On Mon, Apr 16, 2018 at 2:45 AM, Ken Krugler <kkrugler_lists@transpac.com>
wrote:

> If the SQL data is all (or mostly all) needed to join against the data
> from Kafka, then I might try a regular join.
>
> Otherwise it sounds like you want to use an AsyncFunction to do ad hoc
> queries (in parallel) against your SQL DB.
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/asyncio.html
>
> — Ken
>
>
> On Apr 15, 2018, at 12:15 PM, miki haiat <miko5054@gmail.com> wrote:
>
> Hi,
>
> I have a case of metadata enrichment and I'm wondering if my approach is
> the correct way.
>
>    1. Input stream from Kafka.
>    2. Metadata (MD) in MSSQL.
>    3. Map to a new POJO.
>
> I need to extract a key from the Kafka stream and use it to select some
> values from the SQL table.
>
> So I thought to use the Table/SQL API in order to select the table metadata,
> then convert the Kafka stream to a table and join the data by the stream
> key.
>
> At the end I need to map the joined data to a new POJO and send it to
> Elasticsearch.
>
> Any suggestions, or different ways to solve this use case?
>
> thanks,
> Miki
>
>
>
>
> --------------------------
> Ken Krugler
> http://www.scaleunlimited.com
> custom big data solutions & training
> Hadoop, Cascading, Cassandra & Solr
>
>
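The AsyncFunction approach Ken suggests can be sketched with plain Java futures. This is not Flink API code: AsyncEnrichDemo, enrich, and queryDatabase are hypothetical names, and queryDatabase stands in for a real non-blocking SQL client call. In an actual Flink job, the future's result would be handed to ResultFuture.complete() inside asyncInvoke().

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the per-record asynchronous lookup that Flink's
// AsyncFunction pattern wraps: fire a non-blocking query for each
// key and complete a future with the enriched value.
public class AsyncEnrichDemo {

    static CompletableFuture<String> enrich(String key) {
        // In a Flink AsyncFunction, this result would be passed to
        // ResultFuture.complete(...) inside asyncInvoke().
        return CompletableFuture.supplyAsync(() -> queryDatabase(key));
    }

    private static String queryDatabase(String key) {
        return key + ":metadata"; // placeholder for the SQL SELECT
    }

    public static void main(String[] args) throws Exception {
        System.out.println(enrich("order-42").get()); // prints "order-42:metadata"
    }
}
```

Because each lookup hits the database directly, results are always fresh; the trade-off versus a cached join is one query per record, which AsyncIO mitigates by keeping many lookups in flight concurrently.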
