kafka-users mailing list archives

From "Sybrandy, Casey" <Casey.Sybra...@Six3Systems.com>
Subject RE: Consumers re-reading data issue
Date Thu, 13 Dec 2012 13:54:11 GMT
Jun,

Funny you should ask... more duplicate data showed up in our consumers last night, even
though no data has been sent to the brokers since Dec. 11.  The broker that I thought was
the issue hasn't appeared in the consumer logs since I removed the topic from it, so I'm
guessing that's not the problem.

Some other info that may be useful:

We have over 1300 messages about rebalancing in our logs, starting at about 9:00 yesterday
morning, so almost 24 hours of log data.  Granted, all I did was a
"grep rebalance <logfile> | wc -l", but that still seems high.  Of these, 61 messages state
that the consumer can't rebalance.
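
For what it's worth, a slightly finer-grained count would separate rebalance attempts from
outright failures.  Something like the following; the log path is a placeholder, and the
quoted strings are what I'd expect from the 0.7-era ZookeeperConsumerConnector, so verify
them against your own logs before trusting the numbers:

    # Count rebalance attempts vs. failures.  "consumer.log" is a
    # placeholder for the actual consumer log file; the grep patterns
    # are guesses at the 0.7 log messages and may differ by version.
    grep -c "begin rebalancing consumer" consumer.log
    grep -c "can't rebalance after" consumer.log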

Also, we've seen the following error message 244 times:

    org.I0Itec.zkclient.exception.ZkNoNodeException:
    org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for
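
My (possibly wrong) reading is that this means a znode the consumer expected, such as
another consumer's ephemeral registration under /consumers, had already disappeared, which
would fit with ZooKeeper session expirations during all that rebalancing.  If it helps, a
rough way to inspect the consumer-side state in ZooKeeper, with <group> and <topic> as
placeholders for our actual names:

    # zkCli.sh ships with ZooKeeper; point it at your ensemble.
    bin/zkCli.sh -server zkhost:2181

    # Then, inside the shell:
    ls /consumers/<group>/ids              # registered (live) consumers
    ls /consumers/<group>/owners/<topic>   # which consumer owns each partition
    ls /consumers/<group>/offsets/<topic>  # partitions with committed offsets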

Would upgrading to 0.7.1/0.7.2 resolve some or all of these, or at least help?  I ask because
I had to compile notes for a potential upgrade and noticed several changes regarding
rebalancing and duplicate messages.
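
For reference, these are the settings my notes flag as relevant.  The property names below
follow my reading of the 0.7 consumer config and should be double-checked against the docs
for whichever version we land on:

    # Sketch of consumer.properties tweaks; names are from my notes on
    # the 0.7 consumer config, not verified against 0.7.1/0.7.2.
    zk.connect=zkhost:2181
    zk.sessiontimeout.ms=12000   # longer ZK session timeout to ride out pauses
    zk.synctime.ms=2000          # also used as the backoff between rebalance retries
    rebalance.retries.max=8      # default is 4, I believe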

Thanks.

-----Original Message-----
From: Jun Rao [mailto:junrao@gmail.com] 
Sent: Thursday, December 13, 2012 1:03 AM
To: users@kafka.apache.org
Subject: Re: Consumers re-reading data issue

Casey,

Not sure what's happening there. Is this reproducible?

Thanks,

Jun

On Wed, Dec 12, 2012 at 9:52 AM, Sybrandy, Casey <Casey.Sybrandy@six3systems.com> wrote:

> Hello,
>
> We have a strange issue on one of our systems that we just can't seem 
> to figure out.  We loaded 5000 records into our brokers yesterday and 
> have not loaded any since.  Examining the broker logs, we confirmed 
> that no data has been sent to those topics since yesterday's load. 
> However, since then, the same record has shown up in our consumer 
> logs 5 times.
>
> Offsets have been checked (we log everything INFO and above by 
> default) and it doesn't look like the consumers are reading from an 
> older offset.  The only thing I see that may cause this issue is a 
> broker that the consumers sometimes access, but not all of the time. 
> It's an older broker that we don't use anymore for our project, but 
> it is still there for another project.  Until this morning, it still 
> had the topic we were reading from.  I removed the topic by deleting 
> the directories and restarting the broker, so hopefully the consumers 
> won't try to use it again.
>
> Could that be why we're seeing the duplicate records?  Or is there 
> something else we need to check for?
>
> Casey
>
>
