kafka-users mailing list archives

From Mayuresh Gharat <gharatmayures...@gmail.com>
Subject Re: messages lost
Date Wed, 07 Jan 2015 05:25:17 GMT
Try calling .get() on the future returned by the new producer. It should
guarantee that the message has made it to Kafka.
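A minimal sketch of that pattern (new producer API, Kafka 0.8.2+; needs
kafka-clients on the classpath and a reachable broker — the broker address
and topic name below are placeholders):

```java
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncSend {
    public static void main(String[] args)
            throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.100.70.128:9092"); // placeholder
        props.put("acks", "all"); // wait for all in-sync replicas
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("my-topic", "key", "value"); // placeholder topic
            // send() is asynchronous; .get() blocks until the broker acks
            // (or throws on failure), so a completed get() means the
            // message made it to Kafka.
            RecordMetadata meta = producer.send(record).get();
            System.out.printf("acked: partition=%d offset=%d%n",
                    meta.partition(), meta.offset());
        } finally {
            producer.close();
        }
    }
}
```

If you fire-and-forget 100000 send() calls and exit before the futures
complete, the tail of the batch can be silently dropped, which matches the
symptom described below.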

Thanks,

Mayuresh

On Tue, Jan 6, 2015 at 4:21 PM, Sa Li <salicn@gmail.com> wrote:

> Hi, experts
>
> Again, we are still losing data: we send 5000 records but find only 4500
> on the brokers. We did set required.acks to -1 to make sure all brokers
> ack, but that only added latency and did not cure the data loss.
>
>
> thanks
>
>
> On Mon, Jan 5, 2015 at 9:55 AM, Xiaoyu Wang <xwang@rocketfuel.com> wrote:
>
> > @Sa,
> >
> > required.acks is a producer-side configuration. Setting it to -1 means
> > requiring an ack from all in-sync replicas.
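For the old (pre-0.8.2, scala) producer the property is spelled
request.required.acks; a minimal producer config sketch (broker address is
a placeholder from this thread):

```properties
# producer.properties (old producer, Kafka 0.8.x property names)
metadata.broker.list=10.100.70.128:9092
# -1: wait for acks from all in-sync replicas (safest, highest latency)
#  1: leader ack only (can lose messages on leader failover)
#  0: no ack (fastest, least safe)
request.required.acks=-1
```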
> >
> > On Fri, Jan 2, 2015 at 1:51 PM, Sa Li <salicn@gmail.com> wrote:
> >
> > > Thanks a lot, Tim. This is the broker config:
> > >
> > > ----------
> > > broker.id=1
> > > port=9092
> > > host.name=10.100.70.128
> > > num.network.threads=4
> > > num.io.threads=8
> > > socket.send.buffer.bytes=1048576
> > > socket.receive.buffer.bytes=1048576
> > > socket.request.max.bytes=104857600
> > > auto.leader.rebalance.enable=true
> > > auto.create.topics.enable=true
> > > default.replication.factor=3
> > >
> > > log.dirs=/tmp/kafka-logs-1
> > > num.partitions=8
> > >
> > > log.flush.interval.messages=10000
> > > log.flush.interval.ms=1000
> > > log.retention.hours=168
> > > log.segment.bytes=536870912
> > > log.cleanup.interval.mins=1
> > >
> > > zookeeper.connect=10.100.70.128:2181,10.100.70.28:2181,10.100.70.29:2181
> > > zookeeper.connection.timeout.ms=1000000
> > >
> > > -----------------------
> > >
> > >
> > > We actually played around with request.required.acks in the producer
> > > config: -1 causes long latency, while 1 is the setting that loses
> > > messages. But I am not sure whether this is the reason we lose records.
> > >
> > >
> > > thanks
> > >
> > > AL
> > >
> > > On Fri, Jan 2, 2015 at 9:59 AM, Timothy Chen <tnachen@gmail.com> wrote:
> > >
> > > > What's your configured required.acks? And also, are you waiting for
> > > > all of your messages to be acknowledged as well?
> > > >
> > > > The new producer returns futures, but you still need to wait for the
> > > > futures to complete.
> > > >
> > > > Tim
> > > >
> > > > On Fri, Jan 2, 2015 at 9:54 AM, Sa Li <salicn@gmail.com> wrote:
> > > > > Hi, all
> > > > >
> > > > > We are sending messages from a producer: we send 100000 records but
> > > > > see only 99573 records for that topic. We confirmed this by consuming
> > > > > the topic and checking the log size in Kafka Web Console.
> > > > >
> > > > > Any ideas on the lost messages? What could be causing this?
> > > > >
> > > > > thanks
> > > > >
> > > > > --
> > > > >
> > > > > Alec Li
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Alec Li
> > >
> >
>
>
>
> --
>
> Alec Li
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125
