kafka-users mailing list archives

From Niklas Lönn <niklas.l...@gmail.com>
Subject Why is segment.ms=10m for repartition topics in KafkaStreams?
Date Tue, 09 Oct 2018 11:07:30 GMT

Recently we experienced a problem when resetting a streams application that
does quite a lot of operations based on 2 compacted source topics, with 20

We crashed the entire broker cluster with a TooManyOpenFiles exception (we
already have a multi-million open-file limit).

When inspecting the internal topics' configuration I noticed that the
repartition topics have a default config of:

segment.ms=600000 and segment.bytes=52428800 (10 minutes and 50 MB)

My source topic is a compacted topic used as a KTable, and let's assume I
have data for every 10-minute segment; I would quickly get 144 segments
per partition per day.
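To see how quickly this adds up, here is a rough back-of-the-envelope sketch. The per-segment file count (~3: .log, .index, .timeindex) and the example partition/day figures are my assumptions, not numbers from this thread:

```python
# Rough estimate of open file handles caused by time-based segment rolling.
# Assumption: each live log segment keeps ~3 files open on the broker
# (.log, .index, .timeindex); example inputs below are illustrative.
SEGMENT_MS = 10 * 60 * 1000          # segment.ms = 10 minutes
MS_PER_DAY = 24 * 60 * 60 * 1000

segments_per_partition_per_day = MS_PER_DAY // SEGMENT_MS   # 144

def open_files(days_of_data, partitions, files_per_segment=3):
    """File handles if no segments get cleaned up while replaying history."""
    return days_of_data * segments_per_partition_per_day * partitions * files_per_segment

# e.g. replaying 30 days of history into a 20-partition repartition topic:
print(open_files(30, 20))  # 259200 file handles
```

During an application reset the whole history is replayed quickly, and segment rolling follows the record timestamps, so all of those segments can be created in a very short wall-clock window.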

Since this repartition topic is not even compacted, I can't understand the
reasoning behind having a default of a 10-minute segment.ms and 50 MB
segment.bytes.

Is there any best practice regarding this? Potentially we could crash the
cluster every time we need to reset an application.
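One possible mitigation (a sketch of my own, not something from this thread): Kafka Streams passes any property prefixed with "topic." through to the internal topics it creates, so the segment settings can be overridden in the application config. The values below are illustrative:

```properties
# Streams application config; any "topic."-prefixed property is applied
# to the internal (repartition/changelog) topics Streams creates.
# Values here are illustrative assumptions, not recommendations.
topic.segment.ms=86400000        # roll at most one segment per day
topic.segment.bytes=1073741824   # 1 GiB segments instead of 50 MB
```

For already-existing internal topics, the same configs can presumably be altered per topic with the kafka-configs tool instead.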

And does it make sense that it would keep so many open files at the same
time in the first place? Could it be a bug in the file management of the
Kafka broker?
Kind regards
