kafka-users mailing list archives

From Ewen Cheslack-Postava <e...@confluent.io>
Subject Re: Behaviour of KafkaConsumer.poll(long)
Date Tue, 26 Jan 2016 21:14:34 GMT
It's not an iterator (ConsumerRecords is a collection of records), but you
also won't get the entire set of messages all at once. You would hit the
same issue if you set auto.offset.reset to earliest for a new consumer
-- everything in the topic still has to be consumed eventually.

Under the hood, the client makes requests to the brokers to get data to
return when poll() is called. These requests include a limit on the amount
of data to be returned (which you can control with the
max.partition.fetch.bytes setting). The consumer will start fetching more
data only once the previous data has been returned by poll().
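To make the settings above concrete, here is a minimal sketch of the relevant consumer configuration. The broker address, group id, and topic name are placeholders, and the poll loop is shown in comments only because it needs the kafka-clients dependency; 1048576 (1 MB) is the documented default for max.partition.fetch.bytes.

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Start from the earliest offset when the group has no committed
        // offset -- this reproduces the "large backlog" scenario from the
        // question below.
        props.put("auto.offset.reset", "earliest");
        // Cap the data fetched per partition per request (default 1 MB);
        // this is what bounds how much a single poll() can hand back.
        props.put("max.partition.fetch.bytes", "1048576");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps();
        System.out.println(props.getProperty("max.partition.fetch.bytes"));
        // With kafka-clients on the classpath, the consuming loop would look
        // roughly like this (not compiled here):
        //
        //   KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        //   consumer.subscribe(Collections.singletonList("my-topic"));
        //   while (true) {
        //       ConsumerRecords<String, String> records = consumer.poll(100);
        //       for (ConsumerRecord<String, String> record : records) {
        //           // each batch is bounded by the fetch settings,
        //           // so a restart never loads the whole backlog at once
        //       }
        //   }
    }
}
```

So even after a long outage, the first poll() returns at most one bounded fetch per assigned partition, and the next fetch is only issued once that data has been handed to the application.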


On Mon, Jan 25, 2016 at 9:53 AM, Peter Schrott <
peter.schrott89@googlemail.com> wrote:

> Hi Kafka-Gurus,
> Using Kafka in one of my projects, the question arose of how records are
> provided by KafkaConsumer.poll(long). Is the entire map of records copied
> into the client's memory, or does poll(..) work on an iterator-based model?
> I am asking because I face the following scenario: the consumer client is
> down while the producer still writes records to the Kafka topic. If the
> client restarts, could the number of messages received by the first
> poll call simply blow up the JVM?
> Thanks in advance for your answer, Peter

