kafka-users mailing list archives

From Guozhang Wang <wangg...@gmail.com>
Subject Re: question about synchronous producer
Date Thu, 05 Jun 2014 23:53:29 GMT
Libo, did you see any exception/error entries in the producer log?

Guozhang


On Thu, Jun 5, 2014 at 10:33 AM, Libo Yu <yu_libo@hotmail.com> wrote:

> Yes. I used three sync producers with request.required.acks=1. I had them
> publish 2k short messages, and during publishing I restarted all the ZooKeeper
> and Kafka processes (3 hosts in a cluster). There is normally message loss
> after 3 restarts; afterwards I used a consumer to retrieve the messages and
> verify them.
>
> > Date: Thu, 5 Jun 2014 10:15:18 -0700
> > Subject: Re: question about synchronous producer
> > From: wangguoz@gmail.com
> > To: users@kafka.apache.org
> >
> > Libo,
> >
> > To clarify: you can reproduce this issue with a sync producer?
> >
> > Guozhang
> >
> >
> > On Thu, Jun 5, 2014 at 10:03 AM, Libo Yu <yu_libo@hotmail.com> wrote:
> >
> > > When all the brokers are down, the producer should retry a few times
> > > and then throw FailedToSendMessageException, which user code can catch
> > > and retry after a backoff. However, in my tests no exception was
> > > thrown and the message was lost silently. My broker is 0.8.1.1 and my
> > > client is 0.8.0. It is fairly easy to reproduce. Any insight on this
> > > issue?
> > >
> > > Libo
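
[The catch-and-retry pattern described above can be sketched as below. This is
a minimal illustration, not the Kafka API: SendFailedException and FlakySender
are hypothetical stand-ins for kafka.common.FailedToSendMessageException and a
real sync Producer, which the 0.8 client throws from send() once
message.send.max.retries is exhausted.]

```java
// Sketch of application-level retry with backoff around a synchronous send.
// SendFailedException and FlakySender are hypothetical stand-ins for
// kafka.common.FailedToSendMessageException and a real kafka Producer.
public class RetrySketch {
    static class SendFailedException extends Exception {}

    // Simulated sender: fails the first `failures` attempts, then succeeds.
    static class FlakySender {
        private int remainingFailures;
        FlakySender(int failures) { this.remainingFailures = failures; }
        void send(String message) throws SendFailedException {
            if (remainingFailures-- > 0) throw new SendFailedException();
        }
    }

    // Returns the number of attempts used; rethrows if all attempts fail.
    static int sendWithRetry(FlakySender sender, String message,
                             int maxAttempts, long backoffMs)
            throws SendFailedException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                sender.send(message);
                return attempt;
            } catch (SendFailedException e) {
                if (attempt >= maxAttempts) throw e; // out of attempts: give up
                Thread.sleep(backoffMs);             // back off before retrying
            }
        }
    }

    public static void main(String[] args) throws Exception {
        FlakySender sender = new FlakySender(2); // first two sends fail
        int attempts = sendWithRetry(sender, "hello", 5, 10L);
        System.out.println("succeeded after " + attempts + " attempts");
    }
}
```

[Note this only helps if the exception actually surfaces; Libo's report is
precisely that in his setup no exception was thrown.]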
> > >
> > > > Date: Thu, 5 Jun 2014 09:05:27 -0700
> > > > Subject: Re: question about synchronous producer
> > > > From: wangguoz@gmail.com
> > > > To: users@kafka.apache.org
> > > >
> > > > When the producer has exhausted all of its retries, it will drop the
> > > > message on the floor. So when the brokers are down for too long there
> > > > will be data loss.
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Thu, Jun 5, 2014 at 6:20 AM, Libo Yu <yu_libo@hotmail.com> wrote:
> > > >
> > > > > I want to know why there will be message loss when the brokers are
> > > > > down for too long. I've noticed message loss when the brokers are
> > > > > restarted during publishing. It is a sync producer with
> > > > > request.required.acks set to 1.
> > > > >
> > > > > Libo
> > > > >
> > > > > > Date: Thu, 29 May 2014 20:11:48 -0700
> > > > > > Subject: Re: question about synchronous producer
> > > > > > From: wangguoz@gmail.com
> > > > > > To: users@kafka.apache.org
> > > > > >
> > > > > > Libo,
> > > > > >
> > > > > > That is correct. You may want to increase the retry.backoff.ms in
> > > > > > this case. In practice, if the brokers are down for too long, then
> > > > > > data loss is usually inevitable.
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > >
> > > > > > On Thu, May 29, 2014 at 2:55 PM, Libo Yu <yu_libo@hotmail.com>
> > > wrote:
> > > > > >
> > > > > > > Hi team,
> > > > > > >
> > > > > > > Assume I am using a synchronous producer with the following
> > > > > > > default properties:
> > > > > > >
> > > > > > > message.send.max.retries
> > > > > > >       3
> > > > > > > retry.backoff.ms
> > > > > > >       100
> > > > > > >
> > > > > > > I use the Java API Producer.send(message) to send a message.
> > > > > > > If the brokers are shut down while send() is being called, what
> > > > > > > happens? Will send() retry 3 times with a 100 ms interval and
> > > > > > > then fail silently? If I don't want to lose any messages when
> > > > > > > the brokers are back online, what should I do? Thanks.
> > > > > > >
> > > > > > > Libo
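
[A sketch of the knobs discussed in this thread, using the 0.8 producer
property names quoted above. The broker list and the chosen values are
illustrative assumptions: raising message.send.max.retries and retry.backoff.ms
gives a restarting cluster more time to recover before the producer gives up,
and request.required.acks=-1 waits for all in-sync replicas, trading latency
for durability compared with acks=1.]

```java
import java.util.Properties;

// Sketch of a 0.8 sync-producer configuration tuned against broker restarts.
// Broker hosts are placeholders; property names are the 0.8 producer configs.
public class ProducerProps {
    static Properties durableProducerProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "host1:9092,host2:9092,host3:9092");
        props.put("producer.type", "sync");          // synchronous send()
        props.put("request.required.acks", "-1");    // wait for all in-sync replicas
        props.put("message.send.max.retries", "10"); // default is 3
        props.put("retry.backoff.ms", "1000");       // default is 100
        return props;
    }

    public static void main(String[] args) {
        Properties props = durableProducerProps();
        System.out.println("acks=" + props.getProperty("request.required.acks"));
    }
}
```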
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
>
>



-- 
-- Guozhang
