kafka-users mailing list archives

From Neha Narkhede <neha.narkh...@gmail.com>
Subject Re: kafka.common.OffsetOutOfRangeException
Date Wed, 10 Aug 2011 16:54:50 GMT
Evan,

>> 1) How can we check if Kafka has received the data?

If you go to ${log.dir}/your-topic, you should see some log files there if
the Kafka server has received the data.
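
For example, a minimal sketch of that check from Scala (the
/tmp/kafka-logs path and "player_logs" topic name are placeholders from
your setup; each partition gets its own topic-partition directory):

import java.io.File

// Minimal sketch: list the log segment files for every partition of a topic.
// Assumes log.dir=/tmp/kafka-logs and a topic named "player_logs" -- both are
// placeholders, adjust for your setup.
val logDir = new File("/tmp/kafka-logs")
val partitionDirs = Option(logDir.listFiles).getOrElse(Array[File]())
for (dir <- partitionDirs if dir.getName.startsWith("player_logs-");
     segment <- dir.listFiles)
  println("%s/%s: %d bytes".format(dir.getName, segment.getName, segment.length))

If the segment files exist and keep growing as you produce, the broker is
receiving data.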

>> 2) Is there an easy way to check if Kafka is filling up, and not able to
receive messages?

By "Kafka filling up", do you mean running out of disk space? In that case,
the Kafka server will get an IOException and will shut itself down.
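
If you want to catch that before it happens, one option is to poll the
free space on the volume holding log.dir; a rough sketch (the path and the
1 GB threshold below are made-up placeholders):

import java.io.File

// Rough sketch: warn when the volume holding log.dir runs low on space.
// The path and the 1 GB threshold are assumptions -- tune for your deployment.
val logDir = new File("/tmp/kafka-logs")
val freeMB = logDir.getUsableSpace / (1024 * 1024)
if (freeMB < 1024)
  println("WARN: only " + freeMB + " MB free under " + logDir)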

>> 3) Are there other ways to see the health of Kafka?

If you are asking for an admin console, we don't currently have one,
although I'm guessing it would be helpful.

Thanks,
Neha

On Tue, Aug 9, 2011 at 11:10 AM, Evan Chan <ev@ooyala.com> wrote:

> Neha,
>
> We haven't been able to get data through. Some questions:
> 1) How can we check if Kafka has received the data?
> 2) Is there an easy way to check if Kafka is filling up, and not able to
> receive messages?
> 3) Are there other ways to see the health of Kafka?
>
> thanks,
> Evan
>
>
> On Tue, Aug 9, 2011 at 10:57 AM, Neha Narkhede <neha.narkhede@gmail.com>
> wrote:
>
> > Evan,
> >
> > These exceptions are normal. Basically, the first time a ZK consumer
> > starts up and finds no previous offset information in ZooKeeper, it
> > defaults to Long.MAX_VALUE. This is done to trigger the
> > OffsetOutOfRangeException handling code path, which resets the offset to
> > the correct value based on "autooffset.reset".
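> >
> > For reference, a consumer wired up that way might look like the sketch
> > below (the ZK connect string and group id are placeholders):
> >
> > import java.util.Properties
> > import kafka.consumer.{Consumer, ConsumerConfig}
> >
> > // Sketch of a ZK consumer whose very first fetch hits the
> > // OffsetOutOfRangeException path described above; "autooffset.reset"
> > // then decides whether it rewinds to "smallest" or jumps to "largest".
> > val props = new Properties
> > props.put("zk.connect", "localhost:2181") // placeholder
> > props.put("groupid", "my-group")          // placeholder
> > props.put("autooffset.reset", "largest")
> > val connector = Consumer.create(new ConsumerConfig(props))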
> >
> > Are you able to produce and consume the data correctly, in spite of these
> > errors in the logs?
> >
> > Thanks,
> > Neha
> >
> >
> > On Tue, Aug 9, 2011 at 10:37 AM, Evan Chan <ev@ooyala.com> wrote:
> >
> > > Hi guys,
> > >
> > > I'm following along with the Quick Start and ran into an issue when
> > > trying to use a custom Encoder. I'm basically trying to send a
> > > Thrift-encoded message (see the encoder sketch after the list below).
> > > Here is my setup:
> > > - Local ZK, Kafka, consumer, and producer all on one machine
> > > - One topic set up with 10 partitions
> > > - Just one consumer, which takes all 10 partitions
> > > - Consumer has props.put("autooffset.reset", "largest") in order not to
> > > read earlier messages
> > > - Producer and consumer written in Scala
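> > >
> > > Roughly, the encoder is shaped like this (a simplified sketch;
> > > PlayerLogEvent is a stand-in for my actual Thrift-generated class):
> > >
> > > import kafka.message.Message
> > > import kafka.serializer.Encoder
> > >
> > > // Stand-in for the real Thrift-generated class.
> > > case class PlayerLogEvent(bytes: Array[Byte]) {
> > >   def serialize(): Array[Byte] = bytes
> > > }
> > >
> > > // Custom Encoder: wraps the already-serialized Thrift bytes in a Message.
> > > class ThriftEncoder extends Encoder[PlayerLogEvent] {
> > >   override def toMessage(event: PlayerLogEvent): Message =
> > >     new Message(event.serialize())
> > > }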
> > >
> > > ZK and Kafka are started. Then I start the consumer, followed by the
> > > producer. The consumer shows these errors:
> > >
> > > 11/08/09 09:45:45 INFO consumer.ZookeeperConsumerConnector: Consumer player-logs-QoS_eng-dynamic-217.v101.mtv-1312908345470 selected partitions: player_logs:0-0,player_logs:0-1,player_logs:0-2,player_logs:0-3,player_logs:0-4,player_logs:0-5,player_logs:0-6,player_logs:0-7,player_logs:0-8,player_logs:0-9
> > > 11/08/09 09:45:45 INFO consumer.ZookeeperConsumerConnector: end rebalancing consumer player-logs-QoS_eng-dynamic-217.v101.mtv-1312908345470 try #0
> > > [!] Starting Stream
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 1 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 7 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 0 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 8 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 5 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 6 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 4 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 3 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 9 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: FetchRunnable-0 start fetching topic: player_logs part: 2 offset: 9223372036854775807 from 172.16.100.238:9092
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-1 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-7 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-0 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-8 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-5 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-6 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-4 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-3 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-9 with earliest offset 0
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: offset 9223372036854775807 out of range
> > > 11/08/09 09:45:45 INFO consumer.FetcherRunnable: updating partition 0-2 with earliest offset 0
> > > ^
> > >
> > >
> > > Kafka shows these errors:
> > >
> > > [2011-08-09 09:45:45,842] ERROR error when processing request topic:player_logs, part:3 offset:9223372036854775807 maxSize:307200 (kafka.server.KafkaRequestHandlers)
> > > kafka.common.OffsetOutOfRangeException: offset 9223372036854775807 is out of range
> > > at kafka.log.Log$.findRange(Log.scala:47)
> > > at kafka.log.Log.read(Log.scala:223)
> > > at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:124)
> > > at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:115)
> > > at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:114)
> > > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> > > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> > > at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> > > at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> > > at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> > > at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
> > > at kafka.server.KafkaRequestHandlers.handleMultiFetchRequest(KafkaRequestHandlers.scala:114)
> > > at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
> > > at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
> > > at kafka.network.Processor.handle(SocketServer.scala:268)
> > > at kafka.network.Processor.read(SocketServer.scala:291)
> > > at kafka.network.Processor.run(SocketServer.scala:202)
> > > at java.lang.Thread.run(Thread.java:680)
> > >
> > >
> > >
> > > Any ideas?
> > >
> > > I've occasionally seen errors like these when testing with the default
> > > StringEncoder as well.
> > >
> > > thanks,
> > > Evan
> > >
> >
>
>
>
> --
> *Evan Chan*
> Senior Software Engineer |
> ev@ooyala.com | (650) 996-4600
> www.ooyala.com | blog <http://www.ooyala.com/blog> |
> @ooyala<http://www.twitter.com/ooyala>
>
