kafka-users mailing list archives

From Jun Rao <jun...@gmail.com>
Subject Re: Offset Out Of Range Exception
Date Thu, 06 Oct 2011 16:17:52 GMT
Instead of using MAX_LONG, we can just set the initial offset in the
consumer directly to either the smallest or the largest offset based on the
config, rather than waiting for an OffsetOutOfRangeException. This means
that, in addition to Fetcher, we need to call getLatestOffset in
ZookeeperConsumerConnector as well.
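
A rough sketch of the idea (the names below are illustrative only, not the
actual Fetcher/ZookeeperConsumerConnector code):

// Illustrative sketch: resolve the starting offset up front from the
// "autooffset.reset" config instead of fetching from MAX_LONG and waiting
// for the broker to reject it. fetchOffsetsBefore stands in for whatever
// getOffsetsBefore/getLatestOffset-style lookup the consumer already has.
object InitialOffset {
  val EarliestTime = -2L  // sentinel meaning "the smallest available offset"
  val LatestTime   = -1L  // sentinel meaning "the largest available offset"

  def resolve(fetchOffsetsBefore: Long => Seq[Long],
              autoOffsetReset: String): Long = {
    val time = if (autoOffsetReset == "smallest") EarliestTime else LatestTime
    fetchOffsetsBefore(time).headOption.getOrElse(0L)
  }
}

With something like this, both Fetcher and ZookeeperConsumerConnector go
through the same lookup whenever there is no offset stored in ZooKeeper, so
the first fetch never goes out with MAX_LONG at all.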

Jun

On Thu, Oct 6, 2011 at 8:55 AM, Jay Kreps <jay.kreps@gmail.com> wrote:

> Yes, perhaps one fix would be to overload the offset MIN_LONG/MAX_LONG to
> mean first/last.
>
> -Jay
>
> On Thu, Oct 6, 2011 at 8:47 AM, Jun Rao <junrao@gmail.com> wrote:
>
> > Thai,
> >
> > When a consumer starts off for the first time, it doesn't know which
> > offset to start with. So it picks max_long, which will trigger the
> > OffsetOutOfRangeException. Upon receiving the exception, the consumer
> > will reset its offset to either the smallest or the largest offset
> > depending on the configuration. So, technically, this is not really an
> > error, and I agree that it's confusing to log this as an error. Could
> > you open a jira for this?
> >
> > Thanks,
> >
> > Jun
> >
> > On Thu, Oct 6, 2011 at 4:08 AM, Bao Thai Ngo <baothaingo@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I followed the quickstart to set up and run Kafka on my laptop (CentOS
> > > 5.5) and got this error:
> > >
> > > [2011-10-06 17:21:20,349] INFO Starting log flusher every 1000 ms with the following overrides Map() (kafka.log.LogManager)
> > > [2011-10-06 17:21:20,350] INFO Server started. (kafka.server.KafkaServer)
> > > [2011-10-06 17:22:28,366] INFO Closing socket connection to /10.40.9.171. (kafka.network.Processor)
> > > [2011-10-06 17:38:53,725] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor)
> > > [2011-10-06 17:39:01,445] INFO Created log for 'testxx'-0 (kafka.log.LogManager)
> > > [2011-10-06 17:39:01,448] INFO Begin registering broker topic /brokers/topics/testxx/0 with 1 partitions (kafka.server.KafkaZooKeeper)
> > > [2011-10-06 17:39:01,472] INFO End registering broker topic /brokers/topics/testxx/0 (kafka.server.KafkaZooKeeper)
> > > [2011-10-06 17:39:50,641] ERROR error when processing request FetchRequest(topic:testxx, part:0 offset:9223372036854775807 maxSize:307200) (kafka.server.KafkaRequestHandlers)
> > > kafka.common.OffsetOutOfRangeException: offset 9223372036854775807 is out of range
> > >         at kafka.log.Log$.findRange(Log.scala:48)
> > >         at kafka.log.Log.read(Log.scala:224)
> > >         at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:116)
> > >         at kafka.server.KafkaRequestHandlers$$anonfun$2.apply(KafkaRequestHandlers.scala:106)
> > >         at kafka.server.KafkaRequestHandlers$$anonfun$2.apply(KafkaRequestHandlers.scala:105)
> > >         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> > >         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> > >         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> > >         at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> > >         at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> > >         at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
> > >         at kafka.server.KafkaRequestHandlers.handleMultiFetchRequest(KafkaRequestHandlers.scala:105)
> > >         at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:45)
> > >         at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:45)
> > >         at kafka.network.Processor.handle(SocketServer.scala:289)
> > >         at kafka.network.Processor.read(SocketServer.scala:312)
> > >         at kafka.network.Processor.run(SocketServer.scala:207)
> > >         at java.lang.Thread.run(Thread.java:662)
> > >
> > >
> > > Also, please note that this error has happened several times in my
> > > tests (versions 0.6 and 0.7). Could you please help me fix this?
> > >
> > > Thanks,
> > > ~Thai
> > >
> >
>
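
For reference, the first-start flow described in the quoted mails above
(and Jay's MIN_LONG/MAX_LONG suggestion) amounts to treating the extreme
offsets as "no offset yet" sentinels and recovering when the broker rejects
them. A rough sketch, not the real kafka.consumer code:

// Illustrative sketch of today's behavior: the consumer fetches from
// Long.MaxValue ("I have no offset yet"); when the broker rejects that
// offset, the consumer resets to the smallest or largest offset according
// to autooffset.reset and retries from there.
class OffsetOutOfRangeException(msg: String)
  extends RuntimeException(msg) // local stand-in for kafka.common.OffsetOutOfRangeException

object ResetOnOutOfRange {
  def nextOffset(current: Long,
                 tryFetch: Long => Unit,
                 smallestOffset: () => Long,
                 largestOffset: () => Long,
                 autoOffsetReset: String): Long =
    try { tryFetch(current); current }
    catch {
      case _: OffsetOutOfRangeException =>
        // Expected on the very first fetch, which is why the broker-side
        // ERROR log above is misleading rather than a real failure.
        if (autoOffsetReset == "smallest") smallestOffset() else largestOffset()
    }
}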
