kafka-users mailing list archives

From Jun MA <mj.saber1...@gmail.com>
Subject Re: kafka 0.9 offset unknown after cleanup
Date Wed, 04 May 2016 02:45:16 GMT
I think I figured it out. Based on the explanation at http://www.slideshare.net/jjkoshy/offset-management-in-kafka,
offsets.retention.minutes is used to clean up the offsets of dead consumer groups. If a
consumer group hasn't committed any offset for offsets.retention.minutes, Kafka cleans up
its offsets, which makes sense for my case.
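
For anyone hitting the same thing, a minimal sketch of the broker-side settings involved (the
values below are only illustrative; they go in server.properties and take effect after a broker
restart):

    # default is 1440 (24 hours); raise it if consumers can be down/idle longer than that
    offsets.retention.minutes=10080
    # how often the offsets cleanup task runs (default 600000 ms)
    offsets.retention.check.interval.ms=600000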

Thanks,
Jun
> On May 3, 2016, at 11:46 AM, Jun MA <mj.saber1990@gmail.com> wrote:
> 
> Thanks for your reply. I checked the offsets topic and the cleanup policy is actually
> compact.
> 
> Topic:__consumer_offsets	PartitionCount:50	ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=uncompressed
> 
> And I’m using 0.9.0.1 so the default config for log.cleaner.enable is true. 
> In this case theoretically I should not lose any offset, right?
> I noticed that offsets.retention.minutes defaults to 24 hours, which lines up with when
> my offsets became unknown, so I'm wondering if this config is the cause. Can someone
> explain more about this config (its description, "Log retention window for offsets topic",
> doesn't tell me much)? I don't understand what it does.
> 
> Another related question: how can I actually see the data in the offsets topic? Reading
> the actual data might help me understand.
> 
> Thanks,
> Jun
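
Regarding the question above about seeing the data in the offsets topic: one hedged sketch
for 0.9.x is to read __consumer_offsets with the console consumer (the ZooKeeper address is
a placeholder, and the formatter class name below is version-dependent):

    # the consumer.properties file must contain: exclude.internal.topics=false
    bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
      --topic __consumer_offsets --from-beginning \
      --consumer.config config/consumer.properties \
      --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"

Each record prints roughly as [group,topic,partition]::[OffsetMetadata[...],CommitTime ...,ExpirationTime ...];
a record with a null value is the tombstone written when an offset expires.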
> 
>> On May 3, 2016, at 1:15 AM, Gerard Klijs <gerard.klijs@dizzit.com> wrote:
>> 
>> Looks like it, you need to be sure the offset topic is using compaction,
>> and the broker is set to enable compaction.
>> 
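
A quick way to double-check both of those (a hedged sketch, with a placeholder ZooKeeper
address):

    # the internal offsets topic should show cleanup.policy=compact
    bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets

    # and server.properties on each broker should have (the default is already true on 0.9.0.x)
    log.cleaner.enable=true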
>> On Tue, May 3, 2016 at 9:56 AM Jun MA <mj.saber1990@gmail.com> wrote:
>> 
>>> Hi,
>>> 
>>> I'm using the 0.9.0.1 new-consumer API. I noticed that after Kafka cleans up
>>> all the old log segments (once the delete.retention time is reached), I get unknown offsets.
>>> 
>>> bin/kafka-consumer-groups.sh --bootstrap-server server:9092 --new-consumer
>>> --group testGroup --describe
>>> GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
>>> testGroup, test, 0, unknown, 49, unknown, consumer-1_/10.32.241.2
>>> testGroup, test, 1, unknown, 61, unknown, consumer-1_/10.32.241.2
>>> 
>>> In this situation, I cannot consume anything with the new-consumer Java
>>> driver if I disable auto-commit.
>>> I think this happens because the new-consumer driver stores offsets in the broker
>>> as a topic (not in ZooKeeper), and after the delete.retention time is reached the
>>> offset gets deleted and becomes unknown. Since I disabled auto-commit, the consumer
>>> never knows where it is, so it cannot consume anything.
>>> 
>>> Is this what happened here? What should I do in this situation?
>>> 
>>> Thanks,
>>> Jun
> 
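
For the original question about consuming with auto-commit disabled once the committed
offsets have expired: the new consumer then falls back to its auto.offset.reset setting
(default latest). A minimal sketch of new-consumer properties, assuming it is acceptable
to re-read from the beginning of the log when no committed offset exists:

    group.id=testGroup
    enable.auto.commit=false
    # where to start when there is no committed offset (or it has expired):
    #   earliest = beginning of the log, latest = end of the log, none = raise an error
    auto.offset.reset=earliest

With auto.offset.reset=none the consumer fails fast instead of silently jumping, which at
least makes expired offsets visible.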

