kafka-users mailing list archives

From Ismael Juma <ism...@juma.me.uk>
Subject Re: KafkaProducer Retries in 0.9.0.1
Date Thu, 21 Apr 2016 12:06:44 GMT
For anyone else who finds this thread: the issue is being tracked in KAFKA-3594.

Ismael

On Wed, Apr 20, 2016 at 10:45 PM, Ismael Juma <ismael@juma.me.uk> wrote:

> Hi Nicolas,
>
> That seems to be a different issue than the one initially discussed in
> this thread. I suggest starting a new mailing list thread with the steps
> required to reproduce the problem.
>
> Thanks,
> Ismael
>
> On Wed, Apr 20, 2016 at 10:41 PM, Nicolas Phung <nicolas.phung@gmail.com> wrote:
>
>> Hi Ismael,
>>
>> Thanks for your reply.
>>
>> For me, it happens when I take down Kafka brokers or ZooKeeper
>> instances on 0.9.0.1, which should simulate a leader-not-available
>> case. The same kinds of failures against an 0.8.2.2 client/broker
>> retry as expected.
>>
>> From my understanding, if the leader broker is unavailable, the
>> producer should buffer the messages until the broker is available
>> again.
>>
>> Regards,
>> Nicolas
>>
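A minimal sketch of the producer settings that govern this buffering-and-retry behaviour on the 0.9.x Java client (broker addresses and values here are illustrative assumptions, not taken from the thread):

    Properties props = new Properties();
    // Hypothetical brokers; unsent records are buffered client-side until a leader is reachable.
    props.put("bootstrap.servers", "broker1:9092,broker2:9092");
    props.put("retries", 5);              // retry broker-side/network errors
    props.put("retry.backoff.ms", 100);   // pause between retries while a new leader is elected
    props.put("buffer.memory", 33554432); // unsent records queue in this buffer
    props.put("max.block.ms", 60000);     // how long send() may block once the buffer is full
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
    KafkaProducer<String, byte[]> producer = new KafkaProducer<String, byte[]>(props);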
>> On Thu, Apr 21, 2016 at 7:33 AM, Ismael Juma <ismael@juma.me.uk> wrote:
>>
>> > Hi,
>> >
>> > This was explained earlier, I think. Retries are only attempted for
>> > retriable errors. If a message is too large, retrying won't help (it
>> > will still be too large). However, if a leader is not available, a
>> > retry will happen, because the leader may be available by then.
>> >
>> > Ismael
>> >
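A sketch of how this distinction surfaces to application code. The exception classes come from org.apache.kafka.common.errors and are real; the callback body is illustrative:

    producer.send(record, new Callback() {
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception instanceof RetriableException) {
                // e.g. NotLeaderForPartitionException or TimeoutException:
                // the producer retries these itself, up to the "retries" setting.
            } else if (exception != null) {
                // e.g. RecordTooLargeException: retrying cannot help,
                // so the failure is reported straight away.
            }
        }
    });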
>> > On Wed, Apr 20, 2016 at 1:00 PM, Nicolas Phung <nicolas.phung@gmail.com> wrote:
>> >
>> > > Hello,
>> > >
>> > > Have you solved this? I'm encountering the same issue with the new
>> > > producer on a 0.9.0.1 client with a 0.9.0.1 Kafka broker. We tried
>> > > the same kinds of failures (Kafka brokers, ZooKeeper) with an 0.8.2.2
>> > > client and an 0.8.2.2 broker, and retries work as expected on the
>> > > older version. I'm going to check whether someone else has filed a
>> > > related issue about it.
>> > >
>> > > Regards,
>> > > Nicolas PHUNG
>> > >
>> > > On Thu, Apr 7, 2016 at 5:15 AM, christopher palm <cpalm3@gmail.com> wrote:
>> > >
>> > > > Hi, thanks for the suggestion.
>> > > > I lowered the broker's message.max.bytes to be smaller than my
>> > > > payload, so that I now receive an
>> > > > org.apache.kafka.common.errors.RecordTooLargeException.
>> > > >
>> > > > I still don't see the retries happening. The default backoff is
>> > > > 100 ms, and my producer loops for a few seconds, long enough to
>> > > > trigger the retry.
>> > > >
>> > > > Is there something else I need to set?
>> > > >
>> > > > I have tried this with both a sync and an async producer, with the
>> > > > same results.
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Chris
>> > > >
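One way to check whether retries are actually firing is the producer's "record-retry-rate" metric; a sketch (the metric name is assumed from the new producer's metrics, so verify it against your client version):

    for (Map.Entry<MetricName, ? extends Metric> entry : producer.metrics().entrySet()) {
        if ("record-retry-rate".equals(entry.getKey().name())) {
            System.out.println(entry.getKey().group() + " record-retry-rate = "
                    + entry.getValue().value());
        }
    }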
>> > > > On Wed, Apr 6, 2016 at 12:01 AM, Manikumar Reddy <manikumar.reddy@gmail.com> wrote:
>> > > >
>> > > > > Hi,
>> > > > >
>> > > > > Producer message size validation checks ("buffer.memory",
>> > > > > "max.request.size") happen before batching and sending messages.
>> > > > > The retry mechanism applies to broker-side errors and network
>> > > > > errors. Try changing the "message.max.bytes" broker config
>> > > > > property to simulate a broker-side error.
>> > > > >
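A sketch of the two failure paths described above; topic name and sizes are illustrative. A record that violates the client-side limits fails validation inside send() and is never retried, while one that only exceeds the broker's message.max.bytes is rejected by the broker after the request has actually gone out:

    // Assumes a broker configured with message.max.bytes=1024 (illustrative).
    byte[] payload = new byte[8192]; // passes client checks, exceeds the broker limit
    try {
        producer.send(new ProducerRecord<String, byte[]>("test-topic", "key", payload)).get();
    } catch (InterruptedException | ExecutionException e) {
        // Client-side violations (max.request.size, buffer.memory) and
        // broker-side rejections both surface here; for an ExecutionException,
        // e.getCause() carries the underlying Kafka error, e.g. RecordTooLargeException.
        System.err.println("send failed: " + e.getCause());
    }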
>> > > > On Wed, Apr 6, 2016 at 9:53 AM, christopher palm <cpalm3@gmail.com> wrote:
>> > > > >
>> > > > > > Hi All,
>> > > > > >
>> > > > > > I am working with the KafkaProducer using the properties below,
>> > > > > > so that the producer keeps trying to send upon failure, on Kafka
>> > > > > > 0.9.0.1. I am forcing a failure by setting my buffer size smaller
>> > > > > > than my payload, which causes the expected exception below.
>> > > > > >
>> > > > > > I don't see the producer retry the send on receiving this
>> > > > > > failure.
>> > > > > >
>> > > > > > Am I missing something in the configuration to allow the
>> > > > > > producer to retry on failed sends?
>> > > > > >
>> > > > > > Thanks,
>> > > > > > Chris
>> > > > > >
>> > > > > > java.util.concurrent.ExecutionException:
>> > > > > > org.apache.kafka.common.errors.RecordTooLargeException: The
>> > > > > > message is 8027 bytes when serialized which is larger than the
>> > > > > > total memory buffer you have configured with the buffer.memory
>> > > > > > configuration.
>> > > > > >
>> > > > > >  props.put("bootstrap.servers", bootStrapServers);
>> > > > > >
>> > > > > > props.put("acks", "all");
>> > > > > >
>> > > > > > props.put("retries", 3);//Try for 3 strikes
>> > > > > >
>> > > > > > props.put("batch.size", batchSize);//Need to see if this
number
>> > > should
>> > > > > > increase under load
>> > > > > >
>> > > > > > props.put("linger.ms", 1);//After 1 ms fire the batch even
if
>> the
>> > > > batch
>> > > > > > isn't full.
>> > > > > >
>> > > > > > props.put("buffer.memory", buffMemorySize);
>> > > > > >
>> > > > > > props.put("max.block.ms",500);
>> > > > > >
>> > > > > > props.put("max.in.flight.requests.per.connection", 1);
>> > > > > >
>> > > > > > props.put("key.serializer",
>> > > > > > "org.apache.kafka.common.serialization.StringSerializer");
>> > > > > >
>> > > > > > props.put("value.serializer",
>> > > > > > "org.apache.kafka.common.serialization.ByteArraySerializer");
>> > > > > >
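For completeness, a sketch of wiring the properties above into a producer; the topic and payload are illustrative, and the generic types match the configured serializers. close() flushes whatever is still buffered:

    KafkaProducer<String, byte[]> producer = new KafkaProducer<String, byte[]>(props);
    try {
        producer.send(new ProducerRecord<String, byte[]>("test-topic", "key", payload)).get();
    } finally {
        producer.close(); // blocks until previously sent requests complete
    }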
