kafka-users mailing list archives

From Guozhang Wang <wangg...@gmail.com>
Subject Re: kafka consumer fail over
Date Fri, 01 Aug 2014 23:45:56 GMT
Hello Weide,

That should be doable via the high-level consumer; you can take a look at this
page:

https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
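For reference, here is a minimal sketch of what that page describes, using the 0.8-era high-level consumer API. The ZooKeeper address, group name "aggregator", and topic "events" are all hypothetical placeholders; the point is that master and slave run the same code with the same group.id, so whichever process is consuming commits its offsets to ZooKeeper, and a process that takes over resumes from the last committed offset automatically.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class FailoverConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumed ZK address
        // Master and slave share this group.id: committed offsets are kept
        // in ZooKeeper per group, so the process that takes over simply
        // resumes from the last committed position.
        props.put("group.id", "aggregator");              // hypothetical group name
        props.put("auto.commit.interval.ms", "1000");     // commit offsets every second

        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("events", 1);                   // hypothetical topic, one stream

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            consumer.createMessageStreams(topicCountMap);
        ConsumerIterator<byte[], byte[]> it = streams.get("events").get(0).iterator();

        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            // aggregate and republish to Kafka here
            System.out.println("consumed message at offset " + msg.offset());
        }
    }
}
```

With this approach no manual offset syncing is needed; the trade-off versus SimpleConsumer is that you give up fine-grained control over exactly when each offset is committed.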

Guozhang


On Fri, Aug 1, 2014 at 3:20 PM, Weide Zhang <weoccc@gmail.com> wrote:

> Hi,
>
> I have a use case for a master slave  cluster where the logic inside master
> need to consume data from kafka and publish some aggregated data to kafka
> again. When master dies, slave need to take the latest committed offset
> from master and continue consuming the data from kafka and doing the push.
>
> My questions is what will be easiest kafka consumer design for this
> scenario to work ? I was thinking about using simpleconsumer and doing
> manual consumer offset syncing between master and slave. That seems to
> solve the problem but I was wondering if it can be achieved by using high
> level consumer client ?
>
> Thanks,
>
> Weide
>



-- 
-- Guozhang
