kafka-users mailing list archives

From Rahul Singh <rahul.xavier.si...@gmail.com>
Subject Re: kafka cluster size planning
Date Fri, 21 Dec 2018 07:48:05 GMT
Why do you need that many partitions and topics? What's the business use case?

Rahul Singh
Chief Executive Officer
m 202.905.2818

Anant Corporation
1010 Wisconsin Ave NW, Suite 250
Washington, D.C. 20007

We build and manage digital business technology platforms.
On Dec 10, 2018, 12:09 PM -0500, imamba <imamba_kafka@163.com> wrote:
> Hi there,
> We will soon deploy a large-scale Kafka cluster running Kafka 2.0.0 in a production
environment. As Jun Rao recently recommended in his post on cluster limits, to accommodate
the rare event of a hard failure of the controller, it is better to limit each broker to at
most 4,000 partitions and each cluster to at most 200,000 partitions. We plan to run 2,000
topics, each with 4,000 partitions and two replicas, which far exceeds the cluster-wide
limit recommended in that blog post. Can such a large-scale Kafka cluster meet our
production demands?
> PS: the broker hardware configuration is as follows:
> 24 cores
> 256 GB memory
> 10 Gbps NIC
> 22 x 4 TB SATA disks
> Any advice/guidance would be greatly appreciated!
> Thanks!
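[Archive note: a back-of-envelope check of the quoted plan against the recommended limits. The topic/partition/replica figures come from the message above, and the 4,000-per-broker and 200,000-per-cluster limits come from the blog post it cites; everything else is simple arithmetic, not a statement about any specific deployment.]

```python
# Sanity-check the proposed cluster size against the recommended limits.
# Inputs are taken from the quoted message; limits from the cited blog post.
topics = 2_000
partitions_per_topic = 4_000
replication_factor = 2

per_broker_limit = 4_000      # recommended max partition replicas per broker
cluster_limit = 200_000       # recommended max partitions per cluster

total_partitions = topics * partitions_per_topic              # 8,000,000
total_replicas = total_partitions * replication_factor        # 16,000,000
over_limit_factor = total_partitions / cluster_limit          # 40x the limit
min_brokers = total_replicas // per_broker_limit              # 4,000 brokers

print(f"total partitions: {total_partitions:,}")
print(f"total replicas:   {total_replicas:,}")
print(f"over cluster limit by: {over_limit_factor:.0f}x")
print(f"brokers needed at per-broker limit: {min_brokers:,}")
```

At 40x the recommended cluster-wide limit, the plan would need thousands of brokers just to stay within the per-broker guideline, which is likely why the reply above questions the partition/topic counts rather than the hardware.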
