kafka-users mailing list archives

From Ertuğrul Yılmaz <ertugrul.yil...@convertale.com>
Subject Re: Unable to scale kafka cluster
Date Sat, 24 May 2014 10:52:04 GMT
Hi,

I know about the partition reassignment tools. I tried to add a new partition and
assign it to the new broker while the producer test was running, but I couldn't
get it to work.


*The scripts are:*
> bin/kafka-topics.sh --create --topic topic1 --partitions 2
--replication-factor 1
--zookeeper zookeeper:2184,zookeeper:2185,zookeeper:2186
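
(Side note: on 0.8.x a partition can also be added to an existing topic with the
--alter option of kafka-topics.sh. A minimal sketch, assuming the same topic and
ZooKeeper hosts as above; the partition count of 3 is only an example:)

> bin/kafka-topics.sh --alter --topic topic1 --partitions 3
--zookeeper zookeeper:2184,zookeeper:2185,zookeeper:2186

(Adding partitions does not move existing data; only newly produced messages go
to the new partition.)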

> cat topics-to-move.json
{"topics":[{"topic": "topic1"}],"version":1}

> bin/kafka-reassign-partitions.sh --topics-to-move-json-file
topics-to-move.json --broker-list "2" --generate
--zookeeper zookeeper:2184,zookeeper:2185,zookeeper:2186

> cat reassignment-json-file.json
{"version":1,"partitions":[{"topic":"topic1","partition":1,"replicas":[2]}]}

> bin/kafka-reassign-partitions.sh --reassignment-json-file
reassignment-json-file.json --execute
--zookeeper zookeeper:2184,zookeeper:2185,zookeeper:2186
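
(To check the outcome afterwards, something like the following should work on
0.8.x, assuming the same reassignment file and ZooKeeper hosts as above:)

> bin/kafka-reassign-partitions.sh --reassignment-json-file
reassignment-json-file.json --verify
--zookeeper zookeeper:2184,zookeeper:2185,zookeeper:2186

> bin/kafka-topics.sh --describe --topic topic1
--zookeeper zookeeper:2184,zookeeper:2185,zookeeper:2186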





On Thu, May 22, 2014 at 5:34 PM, Jun Rao <junrao@gmail.com> wrote:

> Have you looked at
>
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-CanIaddnewbrokersdynamicallytoacluster
> ?
>
> Thanks,
>
> Jun
>
>
> On Thu, May 22, 2014 at 4:33 AM, Ertuğrul Yılmaz <
> ertugrul.yilmaz@convertale.com> wrote:
>
> > Hi Kafka User and Dev team,
> >
> > We want to use Kafka to handle a high traffic load. My plan is to add new
> > Kafka instances to the cluster as the load grows. To evaluate this, we ran
> > some producer tests and want to learn how we can scale the Kafka cluster.
> > Our test environment and configs are listed below.
> >
> > During our tests, the nMsg.sec value did not increase after adding a new
> > broker; it even decreased.
> >
> > Looking forward to your comments.
> > Thanks.
> >
> > *The topic descriptions are:*
> > Topic:tt3 PartitionCount:1 ReplicationFactor:1 Configs:
> > Topic: tt3 Partition: 0 Leader: 1 Replicas: 1 Isr: 1
> >
> > Topic:tt4 PartitionCount:2 ReplicationFactor:1 Configs:
> > Topic: tt4 Partition: 0 Leader: 1 Replicas: 1 Isr: 1
> > Topic: tt4 Partition: 1 Leader: 2 Replicas: 2 Isr: 2
> >
> > *The test commands and results are:*
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092 --messages
> > 150000 --topics tt3 --threads 5 --message-size 1000 --batch-size 200
> > --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:52:04:523, 2014-05-22 08:52:30:668, 0, 1000, 200, 143.05,
> > 5.4715, 150000, 5737.2347
> >
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092,server2:9092
> > --messages 150000 --topics tt3 --threads 5 --message-size 1000
> > --batch-size 200 --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:50:25:753, 2014-05-22 08:50:53:584, 0, 1000, 200, 143.05,
> > 5.1400, 150000, 5389.6734
> >
> >
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092 --messages
> > 150000 --topics tt4 --threads 5 --message-size 1000 --batch-size 200
> > --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:48:18:628, 2014-05-22 08:48:47:153, 0, 1000, 200, 143.05,
> > 5.0149, 150000, 5258.5451
> >
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092,server2:9092
> > --messages 150000 --topics tt4 --threads 5 --message-size 1000
> > --batch-size 200 --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:46:41:443, 2014-05-22 08:47:11:770, 0, 1000, 200, 143.05,
> > 4.7170, 150000, 4946.0876
> >
> >
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092 --messages
> > 300000 --topics tt3 --threads 5 --message-size 1000 --batch-size 200
> > --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:42:03:119, 2014-05-22 08:42:45:885, 0, 1000, 200, 286.10,
> > 6.6899, 300000, 7014.9184
> >
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092,server2:9092
> > --messages 300000 --topics tt3 --threads 5 --message-size 1000
> > --batch-size 200 --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:33:20:526, 2014-05-22 08:34:03:131, 0, 1000, 200, 286.10,
> > 6.7152, 300000, 7041.4271
> >
> >
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092 --messages
> > 300000 --topics tt4 --threads 5 --message-size 1000 --batch-size 200
> > --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:43:41:984, 2014-05-22 08:44:23:987, 0, 1000, 200, 286.10,
> > 6.8115, 300000, 7142.3470
> >
> > bin/kafka-producer-perf-test.sh --broker-list=server1:9092,server2:9092
> > --messages 300000 --topics tt4 --threads 5 --message-size 1000
> > --batch-size 200 --initial-message-id 500
> > start.time, end.time, compression, message.size, batch.size,
> > total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
> > 2014-05-22 08:44:47:356, 2014-05-22 08:45:29:058, 0, 1000, 200, 286.10,
> > 6.8606, 300000, 7193.8996
> >
> >
> > *machine instance types*
> > zookeeper        amazon ec2 t1.micro
> > kafka broker1    amazon ec2 m1.small
> > kafka broker2    amazon ec2 m1.small
> > producer         amazon ec2 m3.medium
> >
> >
> > *zookeeper1 conf*
> > tickTime=2000
> > initLimit=10
> > syncLimit=5
> > clientPort=2184
> > server.1=localhost:2888:3888
> > server.2=localhost:2889:3889
> > server.3=localhost:2890:3890
> >
> > *kafka broker 1 server.properties*
> > broker.id=1
> > port=9092
> > host.name=server1
> > num.network.threads=8
> > num.io.threads=8
> > num.replica.fetchers=4
> > replica.fetch.max.bytes=1048576
> > replica.fetch.wait.max.ms=500
> > replica.high.watermark.checkpoint.interval.ms=5000
> > replica.socket.timeout.ms=30000
> > replica.socket.receive.buffer.bytes=65536
> > replica.lag.time.max.ms=10000
> > replica.lag.max.messages=4000
> > controller.socket.timeout.ms=30000
> > controller.message.queue.size=10
> > socket.send.buffer.bytes=1048576
> > socket.receive.buffer.bytes=1048576
> > socket.request.max.bytes=104857600
> > queued.max.requests=16
> > fetch.purgatory.purge.interval.requests=100
> > producer.purgatory.purge.interval.requests=100
> > log.dirs=/opt/kafka-logs
> > num.partitions=8
> > message.max.bytes=1000000
> > auto.create.topics.enable=true
> > log.index.interval.bytes=4096
> > log.index.size.max.bytes=10485760
> > log.retention.hours=168
> > log.flush.interval.ms=10000
> > log.flush.interval.messages=20000
> > log.flush.scheduler.interval.ms=2000
> > log.roll.hours=168
> > log.cleanup.interval.mins=30
> > log.segment.bytes=1073741824
> > log.retention.check.interval.ms=60000
> > log.cleaner.enable=false
> > zookeeper.connect=zookeeper:2184,zookeeper:2185,zookeeper:2186
> > zookeeper.connection.timeout.ms=1000000
> > zookeeper.sync.time.ms=2000
> >
> >
> >
> > --
> >
> > Best regards
> >
>



-- 

Best regards
