kafka-users mailing list archives

From Ben Myles <bmy...@salesforce.com>
Subject Re: Unexpected 0.9 Consumer Behavior
Date Wed, 06 Jan 2016 00:24:27 GMT
Thanks for the clear and concise explanation Jason, makes sense now!

On Tue, Jan 5, 2016 at 3:42 PM, Jason Gustafson <jason@confluent.io> wrote:
> Hi Ben,
> The new consumer is single-threaded, so each instance should be given a
> dedicated thread. Using multiple consumers in the same thread won't really
> work as expected because poll() blocks while the group is rebalancing. If
> both consumers aren't actively calling poll(), they won't both be able
> to rejoin the group when a rebalance is needed. So what happens is this:
> c1.poll(1000); // c1 successfully joins the group
> c2.poll(1000); // c2 tries to join, forcing a rebalance. c1 can't rejoin
> because we are blocked in this call, so c1 is kicked out and c2 is the only
> member in the group
> c1.poll(1000); // same thing. c2 is kicked out and c1 is back in the group
> Does that make sense? Try modifying your code to have each instance call
> poll() in a separate thread.
> -Jason
> On Tue, Jan 5, 2016 at 3:07 PM, Ben Myles <bmyles@salesforce.com> wrote:
>> Hi,
>> Wondering if anyone can provide some insight into some unexpected
>> behavior we're seeing with the 0.9 consumer:
>> 1) We create two consumer instances, each with the same group.id and
>> subscribe them to the same topic.
>> 2) We would expect each consumer to be assigned 1/2 the topic
>> partitions, but they both end up being assigned *all* the partitions.
>> 3) The consumers constantly rebalance, which leads to a
>> "CommitFailedException: Commit cannot be completed due to group
>> rebalance".
>> Here's the simple code to replicate:
>> https://gist.github.com/benmyles/a275ffeccc64442a836a
>> If we box each consumer inside a separate thread the problem goes
>> away, however we'd like to understand why it doesn't work as-is. I
>> understand consumers are not thread-safe, however we're not doing any
>> concurrent access here, everything is sequential.
>> Thanks,
>> Ben Myles
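
Jason's one-thread-per-consumer suggestion can be sketched roughly as follows. So the snippet runs standalone, a BlockingQueue stands in for the broker; the queue names, record contents, and counter are illustrative assumptions, not part of the thread. With the real client, each Runnable would own its own KafkaConsumer, subscribe to the topic, and call poll() in its loop so it can always answer a rebalance.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PerThreadConsumers {
    public static void main(String[] args) throws Exception {
        // Stand-ins for two partitions' worth of records; with Kafka,
        // assignment would come from subscribe() and the group protocol.
        BlockingQueue<String> partition1 = new LinkedBlockingQueue<>(List.of("a", "b"));
        BlockingQueue<String> partition2 = new LinkedBlockingQueue<>(List.of("c", "d"));
        AtomicInteger consumed = new AtomicInteger();

        // One dedicated thread per consumer instance, as Jason suggests.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (BlockingQueue<String> partition : List.of(partition1, partition2)) {
            pool.submit(() -> {
                // Each consumer polls continuously on its own thread, so a
                // blocking poll() in one never starves the other.
                try {
                    String record;
                    while ((record = partition.poll(100, TimeUnit.MILLISECONDS)) != null) {
                        consumed.incrementAndGet(); // process the record here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("consumed=" + consumed.get()); // consumed=4
    }
}
```

The sequential pattern in the original gist fails precisely because the two loops above would be interleaved on one thread: whichever consumer is not inside poll() cannot rejoin during a rebalance.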
