kafka-users mailing list archives

From Gerard Klijs <gerard.kl...@dizzit.com>
Subject Re: KafkaProducer Retries in .9.0.1
Date Thu, 07 Apr 2016 04:38:14 GMT
Is it an option to set up a cluster and kill the leader? That's how we
checked retries, and whether we would lose messages that way.
Sending to Kafka happens in two parts: first some serialization and
validation, before an attempt is made to actually send the binary message,
and then the actual sending over the network. I'm not sure, but I assume
checking the size is part of the first step.
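
The two-part behaviour described above can be sketched as follows. This is a hypothetical illustration, not the actual client code; the class, method names, and sizes are all made up for the example:

```java
// A minimal sketch of the two-part send: client-side validation runs before
// the record ever reaches the network layer, so a too-large record fails
// immediately and never enters the retry path. Illustrative only.
public class SendSketch {
    static final int BUFFER_MEMORY = 4096; // stand-in for buffer.memory

    static String send(byte[] serialized) {
        // Part 1: client-side checks -- a failure here is thrown to the
        // caller at once; the retry mechanism never sees it.
        if (serialized.length > BUFFER_MEMORY) {
            throw new IllegalStateException(
                "record too large: " + serialized.length + " bytes");
        }
        // Part 2: hand the record to the network layer, where broker and
        // network errors would be retried up to the configured retries.
        return "queued";
    }

    public static void main(String[] args) {
        System.out.println(send(new byte[100])); // fits: queued
        try {
            send(new byte[8027]); // the size from the stack trace below
        } catch (IllegalStateException e) {
            System.out.println("failed without retry: " + e.getMessage());
        }
    }
}
```

This matches the behaviour Chris reports: a RecordTooLargeException raised by the first step surfaces directly, with no retries attempted.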

On Thu, Apr 7, 2016, 05:15 christopher palm <cpalm3@gmail.com> wrote:

> Hi, thanks for the suggestion.
> I lowered the broker message.max.bytes to be smaller than my payload, so
> that I now receive an
> org.apache.kafka.common.errors.RecordTooLargeException.
>
> I still don't see the retries happening. The default backoff is 100 ms, and
> my producer loops for a few seconds, long enough to trigger a retry.
>
> Is there something else I need to set?
>
> I have tried this with both a sync and an async producer, with the same
> results.
>
> Thanks,
>
> Chris
>
> On Wed, Apr 6, 2016 at 12:01 AM, Manikumar Reddy <
> manikumar.reddy@gmail.com>
> wrote:
>
> > Hi,
> >
> >  Producer message size validation checks ("buffer.memory",
> > "max.request.size") happen before batching and sending messages. The
> > retry mechanism applies only to broker-side errors and network errors.
> > Try changing the "message.max.bytes" broker config property to simulate
> > a broker-side error.
> >
> > On Wed, Apr 6, 2016 at 9:53 AM, christopher palm <cpalm3@gmail.com>
> wrote:
> >
> > > Hi All,
> > >
> > > I am working with the KafkaProducer using the properties below,
> > > so that the producer keeps trying to send upon failure, on Kafka .9.0.1.
> > > I am forcing a failure by setting my buffer size smaller than my
> > > payload, which causes the expected exception below.
> > >
> > > I don't see the producer retry to send on receiving this failure.
> > >
> > > Am I missing something in the configuration to allow the producer to
> > > retry on failed sends?
> > >
> > > Thanks,
> > > Chris
> > >
> > > java.util.concurrent.ExecutionException:
> > > org.apache.kafka.common.errors.RecordTooLargeException: The message is
> > > 8027 bytes when serialized which is larger than the total memory buffer
> > > you have configured with the buffer.memory configuration.
> > > props.put("bootstrap.servers", bootStrapServers);
> > > props.put("acks", "all");
> > > props.put("retries", 3); // try for 3 strikes
> > > props.put("batch.size", batchSize); // may need to increase under load
> > > props.put("linger.ms", 1); // after 1 ms fire the batch even if it isn't full
> > > props.put("buffer.memory", buffMemorySize);
> > > props.put("max.block.ms", 500);
> > > props.put("max.in.flight.requests.per.connection", 1);
> > > props.put("key.serializer",
> > >     "org.apache.kafka.common.serialization.StringSerializer");
> > > props.put("value.serializer",
> > >     "org.apache.kafka.common.serialization.ByteArraySerializer");
> > >
> >
>
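
For anyone reproducing this, Manikumar's suggestion in the quoted message amounts to settings along these lines. The values are illustrative only; message.max.bytes is a broker-side setting (server.properties), the rest are producer settings:

```properties
# broker (server.properties): lower the broker-side limit so the broker,
# not client-side validation, rejects the record
message.max.bytes=1000

# producer: keep the client-side limits large enough that the record passes
# local validation and actually goes over the wire
max.request.size=1048576
buffer.memory=33554432
retries=3
retry.backoff.ms=100
```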
