kafka-users mailing list archives

From Joe Stein <joe.st...@stealth.ly>
Subject Re: Elastic Scaling
Date Fri, 21 Nov 2014 04:44:48 GMT
If you plan ahead of time with enough partitions, then you won't run into
backed-up consumers when you scale them up.

If you have 100 partitions, 20 consumers can read from them (each reading
from 5 partitions). You can scale up to 100 consumers (one per partition)
as the upper limit; if you need more than that, you should have started
with more than 100 partitions. Scaling down can go all the way to 1
consumer if you want, since 1 consumer can read from N partitions.
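The arithmetic above can be sketched as a toy assignment loop (this is an
illustration, not Kafka's actual partition assignor; the round-robin scheme
and consumer names are assumptions for the example):

```python
def assign(num_partitions, consumers):
    """Round-robin partitions across consumers, showing why the
    useful consumer count is capped at the partition count."""
    assignment = {c: [] for c in consumers}
    for p in range(num_partitions):
        # Each partition goes to exactly one consumer in the group.
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

# 100 partitions, 20 consumers -> 5 partitions each
twenty = assign(100, [f"c{i}" for i in range(20)])
print(len(twenty["c0"]))  # 5

# 100 partitions, 150 consumers -> 50 consumers sit idle
crowd = assign(100, [f"c{i}" for i in range(150)])
print(sum(1 for parts in crowd.values() if not parts))  # 50
```

The second case is the point of the advice: any consumer beyond the
partition count receives nothing, so the initial partition count sets the
ceiling on parallelism.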

If you are using the JVM there are consumer implementations you can look
at, and there are other options in other languages and on the JVM too.

At the end of the day the Kafka broker will not impose any limitations on
what you are asking currently (as per the wire protocol); it is all about
how the consumer is designed and developed.

 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>

On Thu, Nov 20, 2014 at 3:18 PM, Sybrandy, Casey <
Casey.Sybrandy@six3systems.com> wrote:

> Hello,
> We're looking into using Kafka for an improved version of a system, and
> the question of how to scale Kafka came up.  Specifically, we want the
> system to scale as transparently as possible.  The concern was that if
> we go from N to N*2 consumers, some would still be backed up while the
> new ones were working on only some of the new records.
> Also, if the load drops, can we scale down effectively?
> I'm sure there's a way to do it.  I'm just hoping that someone has some
> knowledge in this area.
> Thanks.
