kafka-users mailing list archives

From Damian Guy <damian....@gmail.com>
Subject New Consumer & committed offsets
Date Tue, 15 Sep 2015 13:07:19 GMT

I've been trying out the new consumer and have noticed that I get duplicate
messages when I stop the consumer and then restart it (different processes,
same consumer group).

I consume all of the messages on the topic, commit the offsets for each
partition, and stop the consumer. On the next run I expect to get 0
messages; however, I get a batch of records from each partition - in this
case it works out to 1020 messages. Run it again and I get the same batch
of messages.

My logging shows that I've received messages with offsets lower than those
previously committed.


min offsets received: {damian_test_one-0=138824, damian_test_one-1=137321,

I've debugged the initial fetch requests for offsets and the offsets match
up with what has been committed. Is this expected behaviour? Something to
do with batching or compression of message sets?
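To be clear about what I expect: a consumer that resumes from its committed
offsets should see no messages, and duplicates imply the fetch started below
the committed offset. A toy simulation of those semantics (plain Python, not
the Kafka client API - the names here are made up):

```python
# Toy simulation of commit/resume semantics -- not the Kafka client API.
# A consumer resuming from its committed offset sees no duplicates; one
# resuming below it re-reads a batch on every restart.

def consume_from(log, start_offset):
    """Return (records, next_offset) for everything at/after start_offset."""
    records = log[start_offset:]
    return records, start_offset + len(records)

# A single "partition" with 10 records (offsets 0..9).
partition_log = [f"record-{i}" for i in range(10)]

# First run: consume everything, then "commit" the next offset to read.
records, committed = consume_from(partition_log, 0)
assert len(records) == 10 and committed == 10

# Restart from the committed offset: expect 0 messages.
records, _ = consume_from(partition_log, committed)
assert records == []

# If the restart instead fetches from below the committed offset
# (what my logging shows), we get the same duplicate batch every run.
duplicates, _ = consume_from(partition_log, committed - 3)
assert len(duplicates) == 3
```

That last case matches what I'm seeing: the same batch of duplicates on
every restart, which is why I suspect the fetch position rather than the
commit itself.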

