kafka-users mailing list archives

From Tech Bolek <techy_bo...@yahoo.com.INVALID>
Subject Re: kafka “stops working” after a large message is enqueued
Date Wed, 03 Feb 2016 23:06:04 GMT
Deleted the topic and recreated it (with max bytes set), but that did not help. What did help was upping the Java heap size. I monitored the consumer with jstat and noticed two full garbage collection attempts right after publishing the large message; after that the consumer appeared dormant. Upping the Java heap size allowed it to consume the message. Still wondering why the consumer remained silent, i.e. no out-of-heap-memory error or anything.
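
Consuming a single ~70 MB record means the fetched data has to fit in the consumer JVM's heap, so a small default heap can end up in back-to-back full GCs exactly as described above. A rough sketch of the diagnosis and the fix, assuming the consumer runs as a plain java process (the jar and class names below are placeholders):

    # watch GC activity for the consumer process; FGC/FGCT are the full-GC count and time
    jstat -gcutil <consumer-pid> 1000

    # relaunch the consumer with a heap comfortably larger than the biggest expected message
    java -Xmx1g -cp my-consumer.jar com.example.LargeMessageConsumer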

    On Tuesday, February 2, 2016 8:35 PM, Joe Lawson <jlawson@opensourceconnections.com> wrote:

Make sure the topic is created after message.max.bytes is set.
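
The broker-wide limit is message.max.bytes, but a topic can also carry its own max.message.bytes override, which is easiest to apply when the topic is created. A sketch using the 0.9 tooling and a placeholder topic name:

    bin/kafka-topics.sh --zookeeper localhost:2181 --create \
      --topic large-messages --partitions 1 --replication-factor 1 \
      --config max.message.bytes=200000000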
On Feb 2, 2016 9:04 PM, "Tech Bolek" <techy_bolek@yahoo.com.invalid> wrote:

> I'm running kafka_2.11-0.9.0.0 and a Java-based producer/consumer. With
> messages of ~70 KB everything works fine. However, after the producer enqueues
> a larger, 70 MB message, kafka appears to stop delivering messages to the
> consumer. I.e. not only is the large message not delivered, but subsequent
> smaller messages are not delivered either. I know the producer succeeds
> because I use the kafka callback for confirmation and I can see the messages
> in the kafka message log.
>
> kafka config custom changes:
>     message.max.bytes=200000000
>     replica.fetch.max.bytes=200000000
>
> consumer config:
>     props.put("fetch.message.max.bytes", "200000000");
>     props.put("max.partition.fetch.bytes", "200000000");
>
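
A minimal sketch of the consuming side for the 0.9 "new" consumer with the large-fetch limit applied. Note that fetch.message.max.bytes is read only by the old high-level consumer; the new consumer uses max.partition.fetch.bytes. The broker address, group id, and topic name are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class LargeMessageConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
            props.put("group.id", "large-message-group");       // placeholder group id
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            // Must be at least as large as the biggest record the consumer has to fetch.
            props.put("max.partition.fetch.bytes", "200000000");

            KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("large-messages")); // placeholder topic
            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<String, byte[]> record : records)
                    System.out.println("received " + record.value().length + " bytes");
            }
        }
    }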


  