kafka-users mailing list archives

From Robert Quinlivan <rquinli...@signal.co>
Subject Issue with "stuck" consumer in 0.9 broker
Date Sat, 17 Dec 2016 15:24:41 GMT
I am running a 0.9 broker and I'm having trouble viewing and committing
offsets. Upon starting up the broker, I see the following in the kafka.out
log:

[2016-12-17 14:56:14,389] WARN Connected to an old server; r-o mode will be
unavailable (org.apache.zookeeper.ClientCnxnSocket)

I have a single consumer client, and I am using the new consumer API. The
consumer-groups tool reports this:

$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
--new-consumer
my_application

However, it reports an error when checking the position of the
my_application group:

$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092
--new-consumer --describe --group my_application
Consumer group `my_application` does not exist or is rebalancing.
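
Since describe fails, I was thinking of reading the committed offset
directly with a throwaway client that uses manual assignment; as I
understand it, assign() bypasses the group join and so shouldn't trigger
another rebalance. A rough sketch (the topic and partition are stand-ins
for mine, and if the group really is mid-rebalance the coordinator may
refuse this too):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CheckCommittedOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Same group.id as the stuck consumer, so we read its commits
        props.put("group.id", "my_application");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() instead of subscribe(): manual assignment skips the
            // group join protocol, so this client does not force a rebalance
            TopicPartition tp = new TopicPartition("my_topic", 0);
            consumer.assign(Collections.singletonList(tp));
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println(tp + " committed offset: "
                    + (committed == null ? "none" : committed.offset()));
        }
    }
}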

My suspicion is that the consumer group is stuck in a rebalance. Occasionally, I
see the following in my client logs:

INFO  2016-12-17 09:06:05,410
org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Attempt
to heart beat failed since the group is rebalancing, try to re-join group.
INFO  2016-12-17 09:06:05,430
org.apache.kafka.clients.consumer.internals.Fetcher - Fetch offset 2013 is
out of range, resetting offset
INFO  2016-12-17 09:06:05,430
org.apache.kafka.clients.consumer.internals.Fetcher - Fetch offset 30280 is
out of range, resetting offset

I'm guessing this means the consumer is timing out and attempting to
rejoin the group, but failing to do so. The out-of-range fetch offsets may
also just be a side effect of my short log.retention.minutes=1 setting
below, with retention deleting segments out from under the consumer.
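
For reference, the consume loop is essentially the following (a simplified
sketch; the topic name, deserializers, and per-record processing are
stand-ins for what the real application does):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyApplicationConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my_application");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my_topic"));
            while (true) {
                // The 0.9 client only sends heartbeats from within poll(),
                // so all processing between polls has to finish inside
                // session.timeout.ms or the coordinator evicts this member
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    handle(record); // application-specific processing
                }
                consumer.commitSync();
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        // stand-in for the real work
    }
}

Since the 0.9 client has no background heartbeat thread, my working theory
is that slow processing between polls could explain the session timeouts.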

If it's relevant, my server properties file looks like this:

broker.id=0
auto.create.topics.enable=true
group.max.session.timeout.ms=300000
default.replication.factor=1
offsets.topic.replication.factor=1
compression.type=snappy
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
log.dirs=/mnt/kafka-logs
num.partitions=1
delete.topic.enable=true
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.minutes=1
offsets.retention.minutes=86400
log.segment.bytes=486870912
log.retention.check.interval.ms=60000
zookeeper.connect=localhost:2181/my_chroot
zookeeper.connection.timeout.ms=1000000
inter.broker.protocol.version=0.9.0.1
port=9092
offsets.topic.compression.codec=2


I'm having trouble understanding what is broken without more information
from the broker logs. Is there a switch in the broker config that can
provide more verbose logging, or is there another way of checking the
offsets?
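
For the logging question, I was considering bumping the coordinator logger
in config/log4j.properties to DEBUG, though I'm guessing at the logger
name:

# guess: the group coordinator lives under kafka.coordinator in 0.9
log4j.logger.kafka.coordinator=DEBUG

And for checking offsets another way, I understand commits from the new
consumer land in the internal __consumer_offsets topic, so something like
the following might at least show whether commits are arriving (the
formatter class name is my best guess for 0.9, the properties file path is
arbitrary, and internal topics have to be un-excluded):

$ echo "exclude.internal.topics=false" > /tmp/offsets-consumer.properties
$ ./kafka-console-consumer.sh --zookeeper localhost:2181/my_chroot \
    --topic __consumer_offsets --from-beginning \
    --consumer.config /tmp/offsets-consumer.properties \
    --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"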

Thank you
-- 
Robert Quinlivan
Software Engineer, Signal
