kafka-users mailing list archives

From Sandon Jacobs <sjac...@appia.com>
Subject Consumer Multi-Fetch
Date Thu, 06 Mar 2014 02:27:34 GMT
I understand replication uses a multi-fetch concept to maintain the replicas of each partition.
I have a use case where it would be beneficial to grab a "batch" of messages from a Kafka
topic and process them as one unit into a downstream system; in my use case, sending the
messages to a Flume source.

My questions:

  *   Is it possible to fetch a batch of messages when you may not know the exact message
sizes in advance?
  *   If so, how are the offsets managed?
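One common way to handle the offset question is to commit only after the whole batch has been processed, so a failure mid-batch replays from the last committed offset rather than losing messages. The sketch below models that pattern with plain data structures (the function and variable names are hypothetical illustrations, not the real Kafka client API):

```python
# Sketch of the batch-then-commit pattern: buffer fetched messages, hand the
# batch off as one unit, and advance the committed offset only after the
# whole batch has been delivered successfully.

def consume_in_batches(messages, batch_size, deliver):
    """messages: iterable of (offset, payload) pairs in offset order.
    deliver: callable that processes one batch as a unit (e.g. a Flume push).
    Returns the highest offset that is safe to commit."""
    committed = -1  # highest offset whose message was safely delivered
    batch = []
    for offset, payload in messages:
        batch.append((offset, payload))
        if len(batch) >= batch_size:
            deliver([p for _, p in batch])  # process the batch as one unit
            committed = batch[-1][0]        # commit only after success
            batch = []
    if batch:                               # flush any trailing partial batch
        deliver([p for _, p in batch])
        committed = batch[-1][0]
    return committed

# Usage: seven toy messages, delivered in batches of three.
msgs = [(i, "m%d" % i) for i in range(7)]
batches = []
last = consume_in_batches(msgs, 3, batches.append)
# batches == [["m0","m1","m2"], ["m3","m4","m5"], ["m6"]]; last == 6
```

If `deliver` raises partway through, nothing past the previous commit point is acknowledged, so a restart re-fetches from `committed + 1` and the batch is retried as a whole.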

I am trying to avoid queuing them in memory and batching in my process for several reasons.

Thanks in advance…
