kafka-users mailing list archives

From "Birla, Lokesh" <lokesh.bi...@verizon.com>
Subject Re: Kafka 0.8.1.1 Leadership changes are happening very often
Date Tue, 23 Dec 2014 22:06:40 GMT

I was already using a 4 GB heap. I even changed to an 8 GB heap and could still see the leadership
changing very often. In my 5-minute run, I saw the leadership change from 1,2,3 to 3,3,3 to
1,1,1.
Also, my message rate is only 7K msg/sec and the total message count is only 2,169,001.

Does anyone have a clue about these leadership changes?

—Lokesh
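[Editor's note: the heap change Thunder suggests below is usually applied through the environment variables read by the Kafka start scripts. A minimal sketch, assuming the stock bin/kafka-server-start.sh wrapper; the 4G figure mirrors Thunder's setup and is illustrative, not a recommendation:

```shell
# Sketch: raise the broker JVM heap before starting the broker.
# KAFKA_HEAP_OPTS is honored by the stock Kafka start scripts; setting
# -Xms equal to -Xmx avoids heap resizing pauses.
export KAFKA_HEAP_OPTS="-Xmx4G -Xms4G"
bin/kafka-server-start.sh config/server.properties
```

The relevance to leadership churn: a long GC pause can cause the broker's ZooKeeper session to time out, at which point the controller treats the broker as dead and reassigns partition leadership, even though the process never actually went down.]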



From: Thunder Stumpges <tstumpges@ntent.com<mailto:tstumpges@ntent.com>>
Date: Monday, December 22, 2014 at 6:31 PM
To: "users@kafka.apache.org<mailto:users@kafka.apache.org>" <users@kafka.apache.org<mailto:users@kafka.apache.org>>
Cc: "Birla, Lokesh" <lokesh.birla@one.verizon.com<mailto:lokesh.birla@one.verizon.com>>
Subject: RE: Kafka 0.8.1.1 Leadership changes are happening very often

Did you check the GC logs on the server? We ran into this, and the default setting of a 1 GB max
heap on the broker process was nowhere near enough. We currently have it set to 4 GB.
-T

-----Original Message-----
From: Birla, Lokesh [lokesh.birla@verizon.com<mailto:lokesh.birla@verizon.com>]
Received: Monday, 22 Dec 2014, 5:27PM
To: users@kafka.apache.org<mailto:users@kafka.apache.org> [users@kafka.apache.org<mailto:users@kafka.apache.org>]
CC: Birla, Lokesh [lokesh.birla@verizon.com<mailto:lokesh.birla@verizon.com>]
Subject: Kafka 0.8.1.1 Leadership changes are happening very often

Hello,

I am running 3 brokers, one ZooKeeper node, and the producer, all on separate machines. I am also
sending a very low load, around 6K msg/sec, and each message is only around 150 bytes.
I ran the load for only 5 minutes, and during this time I saw the leadership change very often.

I created 3 partitions.

Here the leadership for each partition changed. All 3 brokers are running perfectly fine; no
broker is down. Could someone let me know why the Kafka leadership changes so often?
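[Editor's note: the snapshots below are output of the topic describe tool. To reproduce this check on a 0.8.1.1 cluster, the usual invocation is the following; the ZooKeeper address is a placeholder:

```shell
# Sketch: inspect partition leadership for a topic (ZooKeeper address is
# a placeholder for your own ensemble). Re-running this while the cluster
# is under load shows the Leader column moving between brokers whenever
# leadership churns.
bin/kafka-topics.sh --describe --zookeeper zk-host:2181 --topic mmetopic1
```
]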

Initially:

Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
  Topic: mmetopic1  Partition: 0  Leader: 2  Replicas: 2,3,1  Isr: 2,3,1
  Topic: mmetopic1  Partition: 1  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
  Topic: mmetopic1  Partition: 2  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3


Changed to:


Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
  Topic: mmetopic1  Partition: 0  Leader: 3  Replicas: 2,3,1  Isr: 3,1,2
  Topic: mmetopic1  Partition: 1  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
  Topic: mmetopic1  Partition: 2  Leader: 1  Replicas: 1,2,3  Isr: 1,3,2


Changed to:


Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
  Topic: mmetopic1  Partition: 0  Leader: 1  Replicas: 2,3,1  Isr: 1,2,3
  Topic: mmetopic1  Partition: 1  Leader: 1  Replicas: 3,1,2  Isr: 1,2,3
  Topic: mmetopic1  Partition: 2  Leader: 2  Replicas: 1,2,3  Isr: 2,1,3

Changed to:


Topic: mmetopic1  PartitionCount: 3  ReplicationFactor: 3  Configs:
  Topic: mmetopic1  Partition: 0  Leader: 3  Replicas: 2,3,1  Isr: 3,1,2
  Topic: mmetopic1  Partition: 1  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
  Topic: mmetopic1  Partition: 2  Leader: 1  Replicas: 1,2,3  Isr: 1,3,2


Thanks,
Lokesh
