kafka-users mailing list archives

From Guozhang Wang <wangg...@gmail.com>
Subject Re: How to prevent data loss in "read-process-write" application?
Date Mon, 03 Jun 2019 21:19:32 GMT

Transactional messaging is actually designed to solve exactly this scenario
(pun intended :). Even though your app may not have any stateful logic, you
still need to enable transactional messaging if you are using the
consumer/producer clients directly, or simply enable exactly-once semantics
(EOS) if you are using Kafka Streams.
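For the consumer/producer case, the "read-process-write" loop would look
roughly like the sketch below. This is a minimal illustration, not a
production implementation: the broker address, topic names, group id, and
transactional id are placeholders, and it assumes kafka-clients 0.11+ with a
running broker. The key point is that the consumer offsets are committed via
producer.sendOffsetsToTransaction() inside the same transaction as the output
records, so either both are committed or neither is.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class TransactionalRelay {
    public static void main(String[] args) {
        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "relay-group");
        // Offsets are committed through the producer's transaction instead.
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Setting a transactional.id enables transactions (and idempotence).
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "relay-tx-1");
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
               "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            consumer.subscribe(java.util.Collections.singletonList("input-topic"));
            producer.initTransactions();

            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(100));
                if (records.isEmpty()) continue;

                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> rec : records) {
                        // "process" step elided; forward the record as-is.
                        producer.send(new ProducerRecord<>(
                                "output-topic", rec.key(), rec.value()));
                        // Offset to commit is the next offset to read.
                        offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                                    new OffsetAndMetadata(rec.offset() + 1));
                    }
                    // Consumer offsets ride in the same transaction as the writes.
                    producer.sendOffsetsToTransaction(offsets, "relay-group");
                    producer.commitTransaction();
                } catch (KafkaException e) {
                    // Aborts both the output records and the offset commit,
                    // so the batch will be re-read and re-processed.
                    producer.abortTransaction();
                }
            }
        }
    }
}
```

In Kafka Streams the same guarantee is a single config,
processing.guarantee=exactly_once, and the framework handles the transaction
management for you. Downstream consumers should set
isolation.level=read_committed so they never see aborted records.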


On Sun, Jun 2, 2019 at 8:22 PM 1095193290@qq.com <1095193290@qq.com> wrote:

> Hi
>    I have an application that consumes from Kafka, processes, and sends to
> Kafka. In order to prevent data loss, I need to commit the consumer offset
> only after a batch of messages has been written to Kafka successfully. I
> investigated the Transactions feature, which provides atomic writes to
> multiple partitions and could solve my problem. Is there any other
> recommended solution besides enabling Transactions (I don't need
> exactly-once processing)?
> 1095193290@qq.com

-- Guozhang
