kafka-users mailing list archives

From "Seshadri, Balaji" <Balaji.Sesha...@dish.com>
Subject RE: consumer not consuming messages
Date Fri, 11 Apr 2014 17:10:00 GMT
Are you committing offsets manually after you consume, since you mentioned earlier that "auto.commit.enable"
is false?
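With auto.commit.enable=false, the high-level consumer only persists the group's position when offsets are committed explicitly; until then, a restarted consumer resumes from the last committed offset and re-reads everything after it. A minimal Python simulation of that bookkeeping (illustrative only; `SimulatedConsumer` is a toy model, not the Kafka API):

```python
# Toy model of manual offset commits (not the Kafka client API).
# With auto-commit disabled, only commit() persists the position;
# a restarted consumer resumes from the last committed offset.

class SimulatedConsumer:
    def __init__(self, log, committed_offset=0):
        self.log = log                      # the partition's messages
        self.position = committed_offset    # in-memory fetch position
        self.committed = committed_offset   # offset persisted "in ZooKeeper"

    def poll(self):
        """Consume the next message, advancing only the in-memory position."""
        if self.position >= len(self.log):
            return None
        msg = self.log[self.position]
        self.position += 1
        return msg

    def commit(self):
        """Rough equivalent of the high-level consumer's commitOffsets()."""
        self.committed = self.position


log = ["m0", "m1", "m2"]
c1 = SimulatedConsumer(log)
c1.poll()          # consume m0
c1.poll()          # consume m1 -- but never commit

# Restart without committing: we re-read from offset 0.
c2 = SimulatedConsumer(log, committed_offset=c1.committed)
print(c2.poll())   # "m0" again -- duplicates, because nothing was committed

c1.commit()
c3 = SimulatedConsumer(log, committed_offset=c1.committed)
print(c3.poll())   # "m2" -- resumes after the committed position
```

The point of the question above: if Arjun's consumer never commits, the lag shown by ConsumerOffsetChecker would never go to zero even when messages were processed.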

-----Original Message-----
From: Arjun Kota [mailto:arjun@socialtwist.com] 
Sent: Friday, April 11, 2014 10:56 AM
To: users@kafka.apache.org
Subject: Re: consumer not consuming messages

The console consumer works fine. It's the high-level Java consumer which is giving this problem.

Thanks
Arjun narasimha kota
On Apr 11, 2014 8:42 PM, "Jun Rao" <junrao@gmail.com> wrote:

> We may have a bug that doesn't observe fetch.min.bytes accurately. So a
> lower fetch.wait.max.ms will improve consumer latency.
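For context: the broker answers a fetch request as soon as at least fetch.min.bytes of data is available, or when fetch.wait.max.ms expires, whichever comes first. So with fetch.wait.max.ms=180000 and a bug in the min-bytes accounting, a single small message can sit unanswered for up to three minutes. A toy simulation of that decision rule (a simplified model, not broker code):

```python
def fetch_response_delay_ms(available_bytes, fetch_min_bytes, fetch_wait_max_ms):
    """When does the broker answer a fetch request, in this simplified model?

    It responds immediately once at least fetch_min_bytes are available;
    otherwise it parks the request and responds when fetch_wait_max_ms expires.
    """
    if available_bytes >= fetch_min_bytes:
        return 0
    return fetch_wait_max_ms

# One 100-byte message with fetch.min.bytes=1: answered immediately...
print(fetch_response_delay_ms(100, 1, 180000))   # 0

# ...but if the min-bytes check misfires and the data is not counted, the
# request can sit for the full fetch.wait.max.ms:
print(fetch_response_delay_ms(0, 1, 180000))     # 180000

# A smaller fetch.wait.max.ms bounds the worst-case latency:
print(fetch_response_delay_ms(0, 1, 1000))       # 1000
```

This is why lowering fetch.wait.max.ms masks the bug: it caps how long a parked fetch request can wait.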
>
> Could you run a console consumer and see if you have the same issue? 
> That will tell us if this is a server side issue or an issue just in 
> your consumer.
>
> Thanks,
>
> Jun
>
>
> On Thu, Apr 10, 2014 at 10:28 PM, Arjun <arjun@socialtwist.com> wrote:
>
> > i changed the time to 60 seconds, but even now i see the same result: the
> > consumer is not consuming the messages.
> >
> > Thanks
> > Arjun Narasimha Kota
> >
> >
> > On Friday 11 April 2014 10:36 AM, Arjun wrote:
> >
> >> yup i will change the value and recheck. Thanks for the help.
> >>
> >> thanks
> >> Arjun Narasimha Kota
> >>
> >> On Friday 11 April 2014 10:28 AM, Guozhang Wang wrote:
> >>
> >>> What I tried to say is that it may be caused by your
> >>> "fetch.wait.max.ms"="180000" being too large. Try a smaller value and
> >>> see if that helps.
> >>>
> >>>
> >>> On Thu, Apr 10, 2014 at 9:44 PM, Arjun <arjun@socialtwist.com> wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> I could not see any out-of-memory exceptions in the broker logs. One
> >>>> thing i can see is that i may have configured the consumer poorly. If
> >>>> it's not too much to ask, can you let me know the changes i have to
> >>>> make to overcome this problem?
> >>>>
> >>>> Thanks
> >>>> Arjun Narasimha Kota
> >>>>
> >>>>
> >>>> On Friday 11 April 2014 10:04 AM, Guozhang Wang wrote:
> >>>>
> >>>>> Hi Arjun,
> >>>>>
> >>>>> It seems to be the cause:
> >>>>>
> >>>>> https://issues.apache.org/jira/browse/KAFKA-1016
> >>>>>
> >>>>> Guozhang
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Thu, Apr 10, 2014 at 9:21 PM, Arjun <arjun@socialtwist.com> wrote:
> >>>>>
> >>>>>> I hope this one would give you a better idea.
> >>>>>>
> >>>>>> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1 --zkconnect zkhost:port --topic testtopic
> >>>>>>
> >>>>>> Group   Topic      Pid  Offset  logSize  Lag  Owner
> >>>>>> group1  testtopic  0    253     253      0    group1_ip-xx-1397188061429-b5ff1205-0
> >>>>>> group1  testtopic  1    267     267      0    group1_ip-xx-1397188061429-b5ff1205-0
> >>>>>> group1  testtopic  2    254     254      0    group1_ip-xx-1397188061429-b5ff1205-0
> >>>>>> group1  testtopic  3    265     265      0    group1_ip-xx-1397188061429-b5ff1205-0
> >>>>>> group1  testtopic  4    261     261      0    group1_ip-xx-1397188061429-b5ff1205-1
> >>>>>> group1  testtopic  5    294     294      0    group1_ip-xx-1397188061429-b5ff1205-1
> >>>>>> group1  testtopic  6    248     248      0    group1_ip-xx-1397188061429-b5ff1205-1
> >>>>>> group1  testtopic  7    271     271      0    group1_ip-xx-1397188061429-b5ff1205-1
> >>>>>> group1  testtopic  8    240     240      0    group1_ip-xx-1397188061429-b5ff1205-2
> >>>>>> group1  testtopic  9    261     261      0    group1_ip-xx-1397188061429-b5ff1205-2
> >>>>>> group1  testtopic  10   290     290      0    group1_ip-xx-1397188061429-b5ff1205-2
> >>>>>> group1  testtopic  11   250     251      1    group1_ip-xx-1397188061429-b5ff1205-2
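ConsumerOffsetChecker's Lag column is simply logSize minus the committed consumer Offset, per partition. Recomputing it from the output above (partition ids and offsets copied from the table):

```python
# Lag = logSize - consumer offset, per partition, as ConsumerOffsetChecker reports it.
partitions = {
    # pid: (offset, log_size) -- values taken from the checker output above
    0: (253, 253), 1: (267, 267), 2: (254, 254), 3: (265, 265),
    4: (261, 261), 5: (294, 294), 6: (248, 248), 7: (271, 271),
    8: (240, 240), 9: (261, 261), 10: (290, 290), 11: (250, 251),
}

lag = {pid: log_size - offset for pid, (offset, log_size) in partitions.items()}
total_lag = sum(lag.values())

print(lag[11])      # 1  -- the single unconsumed message
print(total_lag)    # 1  -- only partition 11 is behind
```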
> >>>>>>
> >>>>>> If you see the output, the lag is 1 for the partition in the last
> >>>>>> line. I just sent one message. This topic is not new; as you can
> >>>>>> see, a lot of messages have accumulated since yesterday. This one
> >>>>>> message is never consumed by the consumer. But if i send some 10
> >>>>>> messages, then all the messages are consumed.
> >>>>>>
> >>>>>> Please let me know if i have to change any consumer properties.
> >>>>>>
> >>>>>> My consumer properties are :
> >>>>>> "fetch.wait.max.ms"="180000"
> >>>>>> "fetch.min.bytes" = "1"
> >>>>>> "auto.offset.reset" = "smallest"
> >>>>>> "auto.commit.enable"=  "false"
> >>>>>> "fetch.message.max.bytes" = "1048576"
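Given Jun's and Guozhang's suggestions, the likely fix is lowering fetch.wait.max.ms from 180000; the other values can stay. A sketch of adjusted consumer properties (the 1000 ms value is an assumed example, not taken from the thread):

```properties
# consumer.properties -- adjusted values (sketch; 1000 ms is an assumed example)
fetch.wait.max.ms=1000        # was 180000; bounds latency when fetch.min.bytes is not met
fetch.min.bytes=1
auto.offset.reset=smallest
auto.commit.enable=false      # offsets must then be committed manually after processing
fetch.message.max.bytes=1048576
```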
> >>>>>>
> >>>>>>
> >>>>>> Thanks
> >>>>>> Arjun Narasimha Kota
> >>>>>> On Friday 11 April 2014 06:23 AM, Arjun Kota wrote:
> >>>>>>
> >>>>>>> The consumer does use specific topics.
> >>>>>>
> >>>>>>> On Apr 11, 2014 6:23 AM, "Arjun Kota" <arjun@socialtwist.com> wrote:
> >>>>>>>
> >>>>>>>       Yes the message shows up on the server.
> >>>>>>>
> >>>>>>>       On Apr 11, 2014 12:07 AM, "Guozhang Wang" <wangguoz@gmail.com> wrote:
> >>>>>>>
> >>>>>>>           Hi Arjun,
> >>>>>>>
> >>>>>>>           If you only send one message, does that message show up
> >>>>>>>           on the server? Does your consumer use wildcard topics or
> >>>>>>>           specific topics?
> >>>>>>>
> >>>>>>>           Guozhang
> >>>>>>>
> >>>>>>>
> >>>>>>>           On Thu, Apr 10, 2014 at 9:20 AM, Arjun <arjun@socialtwist.com> wrote:
> >>>>>>>
> >>>>>>>           > But we have auto.offset.reset set to smallest, not
> >>>>>>>           > largest; does this issue arise even then? If so, is
> >>>>>>>           > there any workaround?
> >>>>>>>           >
> >>>>>>>           > Thanks
> >>>>>>>           > Arjun Narasimha Kota
> >>>>>>>           >
> >>>>>>>           >
> >>>>>>>           > On Thursday 10 April 2014 09:39 PM, Guozhang Wang wrote:
> >>>>>>>           >
> >>>>>>>           >> It could be https://issues.apache.org/jira/browse/KAFKA-1006.
> >>>>>>>           >>
> >>>>>>>           >> Guozhang
> >>>>>>>           >>
> >>>>>>>           >>
> >>>>>>>           >> On Thu, Apr 10, 2014 at 8:50 AM, Arjun <arjun@socialtwist.com> wrote:
> >>>>>>>           >>
> >>>>>>>           >>> it's auto created, but even after topic creation
> >>>>>>>           >>> this is the scenario
> >>>>>>>           >>>
> >>>>>>>           >>> Arjun
> >>>>>>>           >>>
> >>>>>>>           >>> On Thursday 10 April 2014 08:41 PM, Guozhang Wang wrote:
> >>>>>>>           >>>
> >>>>>>>           >>>  Hi Arjun,
> >>>>>>>           >>>>
> >>>>>>>           >>>> Did you manually create the topic or use auto.topic.creation?
> >>>>>>>           >>>>
> >>>>>>>           >>>> Guozhang
> >>>>>>>           >>>>
> >>>>>>>           >>>>
> >>>>>>>           >>>> On Thu, Apr 10, 2014 at 7:39 AM, Arjun <arjun@socialtwist.com> wrote:
> >>>>>>>           >>>>
> >>>>>>>           >>>>> Hi,
> >>>>>>>           >>>>>
> >>>>>>>           >>>>> We have a 3-node Kafka 0.8 setup with a ZooKeeper
> >>>>>>>           >>>>> ensemble. We use the high-level consumer with auto
> >>>>>>>           >>>>> commit offset set to false. I am facing a peculiar
> >>>>>>>           >>>>> problem with Kafka. When i send some 10-20 messages
> >>>>>>>           >>>>> or so, the consumer starts to consume them. But if
> >>>>>>>           >>>>> i send only one message to Kafka, then even though
> >>>>>>>           >>>>> the consumer is active it does not try to fetch the
> >>>>>>>           >>>>> message. There is nothing in the logs; the messages
> >>>>>>>           >>>>> are simply not being fetched by the Kafka consumer.
> >>>>>>>           >>>>> The messages are there on the Kafka server. Can
> >>>>>>>           >>>>> someone let me know where i am going wrong?
> >>>>>>>           >>>>>
> >>>>>>>           >>>>> Thanks
> >>>>>>>           >>>>> Arjun Narasimha Kota
> >>>>>>>           >>>>>
> >>>>>>>           >>>>>
> >>>>>>>           >>>>>
> >>>>>>>           >>>>
> >>>>>>>           >>
> >>>>>>>           >
> >>>>>>>
> >>>>>>>
> >>>>>>>           --
> >>>>>>>           -- Guozhang
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>
> >>
> >
>
