kafka-users mailing list archives

From Liam Clarke <liam.cla...@adscale.co.nz>
Subject Re: Configuration of log compaction
Date Mon, 17 Dec 2018 22:06:29 GMT
Hi Claudia,

Anything useful in the log cleaner log files?

Cheers,

Liam Clarke

On Tue, 18 Dec. 2018, 3:18 am Claudia Wegmann <c.wegmann@kasasi.de> wrote:

> Hi,
>
> thanks for the quick response.
>
> My problem is not that no new segments are created, but that segments
> with old data do not get compacted.
> I had to restart one broker because there was no disk space left. After
> recreating all indexes etc., the broker recognized the old data and
> compacted it correctly. I had to restart all the other brokers of the
> cluster, too, for them to also recognize the old data and start compacting.
>
> So I guess, before restarting, the brokers were too busy to compact/delete
> old data? Is there a configuration to ensure compaction after a certain
> amount of time or something?
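>
> For what it's worth, this is roughly how one could inspect the
> compaction-related settings on a topic with the Java AdminClient (a
> sketch only; the broker address and the topic name "my-changelog" are
> placeholders):
>
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.AdminClient;
> import org.apache.kafka.clients.admin.AdminClientConfig;
> import org.apache.kafka.clients.admin.Config;
> import org.apache.kafka.common.config.ConfigResource;
>
> public class ShowCompactionConfig {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         try (AdminClient admin = AdminClient.create(props)) {
>             ConfigResource topic =
>                     new ConfigResource(ConfigResource.Type.TOPIC, "my-changelog");
>             Config config = admin.describeConfigs(Collections.singletonList(topic))
>                     .all().get().get(topic);
>             // Settings that determine when the log cleaner touches this topic
>             for (String name : new String[]{"cleanup.policy", "segment.ms",
>                     "min.cleanable.dirty.ratio", "delete.retention.ms"}) {
>                 System.out.println(name + " = " + config.get(name).value());
>             }
>         }
>     }
> }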
>
> Best,
> Claudia
>
> -----Original Message-----
> From: Spico Florin <spicoflorin@gmail.com>
> Sent: Monday, 17 December 2018 14:28
> To: users@kafka.apache.org
> Subject: Re: Configuration of log compaction
>
> Hello!
>   Please check whether the segment.ms configuration on the topic will
> help you to solve your problem.
>
> https://kafka.apache.org/documentation/
>
>
> https://stackoverflow.com/questions/41048041/kafka-deletes-segments-even-before-segment-size-is-reached
>
> Regards,
>  Florin
>
> segment.ms: This configuration controls the period of time after which
> Kafka will force the log to roll, even if the segment file isn't full,
> to ensure that retention can delete or compact old data.
>   Type: long
>   Default: 604800000 (7 days)
>   Valid values: [1,...]
>   Server default property: log.roll.ms
>   Importance: medium
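>
> A minimal sketch (not from the docs; the broker address and the topic
> name "my-changelog" are placeholders) of lowering segment.ms with the
> Java AdminClient so that segments roll at least hourly:
>
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.AdminClient;
> import org.apache.kafka.clients.admin.AdminClientConfig;
> import org.apache.kafka.clients.admin.Config;
> import org.apache.kafka.clients.admin.ConfigEntry;
> import org.apache.kafka.common.config.ConfigResource;
>
> public class LowerSegmentMs {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         try (AdminClient admin = AdminClient.create(props)) {
>             ConfigResource topic =
>                     new ConfigResource(ConfigResource.Type.TOPIC, "my-changelog");
>             // Roll a new segment at least every hour (3600000 ms) so old
>             // data becomes eligible for compaction sooner than the 7-day default
>             Config update = new Config(Collections.singletonList(
>                     new ConfigEntry("segment.ms", "3600000")));
>             admin.alterConfigs(Collections.singletonMap(topic, update)).all().get();
>         }
>     }
> }
>
> Note that alterConfigs replaces the topic's whole set of overrides, so
> include any other overrides you want to keep in the same call.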
>
> On Mon, Dec 17, 2018 at 12:28 PM Claudia Wegmann <c.wegmann@kasasi.de>
> wrote:
>
> > Dear kafka users,
> >
> > I've got a problem on one of my Kafka clusters. I use this cluster
> > with Kafka Streams applications. Some of these stream apps use a Kafka
> > state store, so a changelog topic is created for those stores with
> > cleanup policy "compact". One of these topics has been running wild
> > for some time now and seems to grow indefinitely. When I check the log
> > file of the first segment, there is a lot of data in it that should
> > have been compacted already.
> >
> > So I guess I did not configure everything correctly for log compaction
> > to work as expected. Which config parameters have an influence on log
> > compaction? And how do I set them when I want data older than 4 hours
> > to be compacted?
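> >
> > For context, the changelog settings ultimately come from how the store
> > is materialized. If per-store changelog config is the right knob, it
> > would look roughly like this sketch (the store name and the values are
> > illustrative, not what I currently run):
> >
> > import java.util.HashMap;
> > import java.util.Map;
> > import org.apache.kafka.common.utils.Bytes;
> > import org.apache.kafka.streams.kstream.Materialized;
> > import org.apache.kafka.streams.state.KeyValueStore;
> >
> > Map<String, String> changelogConfig = new HashMap<>();
> > changelogConfig.put("cleanup.policy", "compact");
> > // Roll segments at least every 4 hours (14400000 ms) so the cleaner
> > // can get at data older than that
> > changelogConfig.put("segment.ms", "14400000");
> > // Compact once 10% of the log is dirty (the default is 0.5)
> > changelogConfig.put("min.cleanable.dirty.ratio", "0.1");
> >
> > Materialized<String, String, KeyValueStore<Bytes, byte[]>> store =
> >         Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("my-store")
> >                 .withLoggingEnabled(changelogConfig);
> > // pass "store" into the aggregate()/table() call that backs the state store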
> >
> > Thanks in advance.
> >
> > Best,
> > Claudia
> >
>
