kafka-users mailing list archives

From Akhilesh Pathodia <pathodia.akhil...@gmail.com>
Subject Re: Kafka producer drops large messages
Date Tue, 11 Apr 2017 20:06:26 GMT
Hi Smriti,

You will have to change some of the broker configurations, such as
message.max.bytes, to a larger value. The default value is 1 MB, I guess.

Please check below configs:

Broker Configuration

message.max.bytes

   Maximum message size the broker will accept. Must be smaller than the
   consumer fetch.message.max.bytes, or the consumer cannot consume the
   message.

   Default value: 1000000 (1 MB)

log.segment.bytes

   Size of a Kafka data file. Must be larger than any single message.

   Default value: 1073741824 (1 GiB)

replica.fetch.max.bytes

   Maximum message size a broker can replicate. Must be larger than
   message.max.bytes, or a broker can accept messages it cannot replicate,
   potentially resulting in data loss.

   Default value: 1048576 (1 MiB)
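To illustrate, here is a sketch of how those settings might look when raised
together. The 10 MB value is only an example, not a recommendation; the first
two lines go in the broker's server.properties, the last in the (old, 0.8.x)
consumer's configuration:

```
# server.properties (broker): accept messages up to ~10 MB (example value)
message.max.bytes=10485760

# must be >= message.max.bytes, or followers cannot replicate large messages
replica.fetch.max.bytes=10485760

# consumer config (old consumer): must be >= the largest expected message
fetch.message.max.bytes=10485760
```

Keeping replica.fetch.max.bytes and fetch.message.max.bytes at least as large
as message.max.bytes is the key constraint: the broker-side limit decides what
gets accepted, and the other two decide whether it can be replicated and read.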


On Wed, Apr 12, 2017 at 12:23 AM, Smriti Jha <smriti@agolo.com> wrote:

> Hello all,
> Can somebody shed light on kafka producer's behavior when the total size of
> all messages in the buffer (bounded by queue.buffering.max.ms) exceeds the
> socket buffer size (send.buffer.bytes)?
> I'm using Kafka v0.8.2 with the old Producer API and have noticed that our
> systems are dropping a few messages that are closer to 1MB in size. A few
> messages that are only a few KBs in size and are attempted to be sent
> around the same time as >1MB messages also get dropped. The official
> documentation does talk about never dropping a "send" in case the buffer
> has reached queue.buffering.max.messages but I don't think that applies to
> size of the messages.
> Thanks!
