kafka-users mailing list archives

From Aparna Chaudhary <aparna.chaudh...@gmail.com>
Subject Understanding relation of Large Messages with Kafka Broker JVM GC
Date Thu, 04 Jul 2019 19:52:24 GMT

I'm trying to understand how Kafka broker memory is impacted, leading to
more JVM GC, when large messages are sent to Kafka.

*Large messages can cause longer garbage collection (GC) pauses as brokers
allocate large chunks.*

Kafka uses zero-copy transfer, so messages do not *pass through* the JVM
heap, implying no use of HeapByteBuffer.

My reasoning is: if there is not enough virtual memory available to allocate
a buffer, it will trigger JVM GC (even when sufficient heap space is
available). So JVM GC behavior is a function of the amount of memory
available to the Kafka broker (and the max message size and number of
partitions).
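To make the distinction my reasoning rests on concrete: a minimal sketch (plain Java NIO, not Kafka broker code) contrasting a heap-backed buffer, which the GC tracks directly, with a direct buffer, whose native memory lives outside the heap but is only reclaimed when its small wrapper object is collected:

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] on the JVM heap, fully managed by GC.
        ByteBuffer heap = ByteBuffer.allocate(1024 * 1024);

        // Direct buffer: data lives in native memory outside the heap.
        // Only the small DirectByteBuffer wrapper object is on the heap,
        // and the native memory is freed when that wrapper is collected --
        // which is why direct-memory pressure can still interact with GC.
        ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024);

        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
    }
}
```
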

Is the above reasoning correct, or am I missing something?
Is there some documentation (apart from code) explaining how buffer
allocation is done in Kafka Broker?

