kafka-users mailing list archives

From Guozhang Wang <wangg...@gmail.com>
Subject Re: use case with high rate of duplicate messages
Date Tue, 01 Oct 2013 15:32:54 GMT
Batch processing will increase throughput, but it will also increase latency.
How much latency can your real-time processing tolerate?

One thing you could try is to use keyed messages, with the key being the md5
hash of your message. Kafka has a deduplication mechanism on the brokers that
dedups messages sharing the same key. All you need to do is set the dedup
frequency appropriately for your use case.
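
As a minimal sketch (not from this thread: the topic name, broker address,
and the newer Java producer client here are my assumptions), keyed production
could look like this:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KeyedStatusProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            String message = "{\"host\":\"web-01\",\"status\":\"ok\"}"; // example payload
            String key = md5Hex(message); // identical payloads get identical keys

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("status-events", key, message));
            }
        }

        // Hex-encode the MD5 digest of the message body.
        static String md5Hex(String s) throws Exception {
            byte[] digest = MessageDigest.getInstance("MD5")
                .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        }
    }

For the broker-side dedup to kick in, the topic needs log compaction enabled
(cleanup.policy=compact); how aggressively the cleaner runs is tuned with
settings such as min.cleanable.dirty.ratio and segment.ms. Note that
compaction is best-effort: recent duplicates still sitting in the active
segment are not removed yet, so consumers should tolerate the occasional
duplicate.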

Guozhang


On Tue, Oct 1, 2013 at 8:19 AM, S Ahmed <sahmed1020@gmail.com> wrote:

> I have a use case where thousands of servers send status-type messages,
> which I am currently handling in real time without any kind of queueing
> system.
>
> So currently, when I receive a message, I compute an md5 hash of it and
> perform a lookup in my database to see if it is a duplicate; if not, I
> store the message.
>
> Now, the message format can be either xml or json, and the actual parsing
> of the message takes time, so I am thinking of storing all the messages in
> kafka first and then batch processing them, in the hope that this will be
> faster.
>
> Do you think this would be a faster way of recognizing duplicate messages,
> or is it just the same problem done at the batch level?
>
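
To the batch question above, a minimal sketch (using the modern Java consumer
client, which post-dates this thread; the topic name, group id, and the
storeBatch helper are hypothetical) of draining a batch from Kafka, deduping
by the md5 key in memory, and hitting the database once per batch:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class BatchDedupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("group.id", "status-dedup");            // assumed group id
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("status-events"));
                while (true) {
                    ConsumerRecords<String, String> batch =
                        consumer.poll(Duration.ofSeconds(5));
                    // Keep the first occurrence of each key (the md5 hash the
                    // producer set), dropping duplicates within the batch.
                    Map<String, String> unique = new LinkedHashMap<>();
                    for (ConsumerRecord<String, String> rec : batch)
                        unique.putIfAbsent(rec.key(), rec.value());
                    // One bulk "insert if hash absent" per batch instead of a
                    // database round trip per message.
                    storeBatch(unique);
                }
            }
        }

        // Hypothetical sink: replace with a bulk upsert keyed on the hash.
        static void storeBatch(Map<String, String> unique) {
            unique.forEach((hash, msg) -> System.out.printf("store %s -> %s%n", hash, msg));
        }
    }

Whether this beats the per-message lookup depends on the database: the win is
mostly in amortizing the parsing and doing one bulk write per batch, not in
the dedup check itself.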



-- 
-- Guozhang
