kafka-users mailing list archives

From Florian Leibert <...@leibert.de>
Subject Re: Errors in load tests
Date Fri, 09 Dec 2011 20:03:24 GMT
I verified that if I kill one of the brokers, I no longer see the
MessageSize exceptions. However, with both brokers running, I still see them.

On Fri, Dec 9, 2011 at 11:55 AM, Florian Leibert <flo@leibert.de> wrote:

> So what would explain the empty messages then?
>
>
> On Fri, Dec 9, 2011 at 11:48 AM, Jay Kreps <jay.kreps@gmail.com> wrote:
>
>> Yeah, this is really just bad logging. Our way of initializing a client
>> that has no position in the log (no existing offset) was to try an
>> impossible offset and then reset based on the client settings (e.g. reset
>> to the latest offset). The problem is that, the way it is logged, it looks
>> like an error.
>>
>> Here is the JIRA:
>> https://issues.apache.org/jira/browse/KAFKA-89
>>
>> It is fixed on trunk.
>>
>> -Jay
>>
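
For anyone tracking down the same log noise: the fallback Jay describes is driven by the consumer's offset-reset setting. A minimal sketch of a high-level consumer with that fallback configured follows; the class and property names (kafka.consumer.Consumer, ConsumerConfig, zk.connect, groupid, autooffset.reset) are taken from the 0.7-era consumer API and are assumptions here; they should be checked against the release actually in use (0.6 may differ).

    import java.util.Properties
    import kafka.consumer.{Consumer, ConsumerConfig}

    val props = new Properties()
    props.put("zk.connect", "localhost:2181")   // ZooKeeper the brokers register with (assumed key name)
    props.put("groupid", "load-test-consumer")  // consumer group id (assumed key name)
    // With no stored offset, or an out-of-range offset, fall back to the
    // smallest or largest available offset instead of surfacing an error.
    props.put("autooffset.reset", "smallest")   // or "largest" (assumed key name)

    val connector = Consumer.create(new ConsumerConfig(props))

Setting "largest" starts the consumer at the end of the log, which corresponds to the "reset to the latest offset" behavior described above.
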
>> On Fri, Dec 9, 2011 at 10:52 AM, Florian Leibert <flo@leibert.de> wrote:
>>
>> > Hi -
>> > I'm running some load tests on Kafka - two brokers, one producer, one
>> > consumer (all running locally; I just wanted to test the partitioning).
>> > I'm using the default configuration, except that each broker has been
>> > changed so that there are 8 partitions globally.
>> >
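
As a reference for the partition change described above, a sketch of the broker-side setting, assuming the num.partitions key and the server.properties layout of the 0.7-era broker configuration; the exact key name in 0.6 should be verified, and how a per-broker setting maps onto "8 partitions globally" across two brokers depends on the deployment.

    # server.properties (one copy per broker; key name is an assumption)
    num.partitions=8
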
>> > After some time I start seeing more and more of these errors:
>> >
>> > [2011-12-09 10:39:11,059] ERROR error when processing request topic:d3, part:3 offset:9223372036854775807 maxSize:307200 (kafka.server.KafkaRequestHandlers)
>> > kafka.common.OffsetOutOfRangeException: offset 9223372036854775807 is out of range
>> >     at kafka.log.Log$.findRange(Log.scala:47)
>> >     at kafka.log.Log.read(Log.scala:223)
>> >     at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:124)
>> >     at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:115)
>> >     at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:114)
>> >     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>> >     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>> >     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>> >     at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
>> >     at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>> >     at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
>> >     at kafka.server.KafkaRequestHandlers.handleMultiFetchRequest(KafkaRequestHandlers.scala:114)
>> >     at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
>> >     at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
>> >     at kafka.network.Processor.handle(SocketServer.scala:268)
>> >     at kafka.network.Processor.read(SocketServer.scala:291)
>> >     at kafka.network.Processor.run(SocketServer.scala:202)
>> >     at java.lang.Thread.run(Thread.java:680)
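
Note on the offset value in the trace: 9223372036854775807 is exactly Long.MaxValue (2^63 - 1), i.e. the deliberately "impossible" sentinel offset Jay describes above, not a real log position. A quick Scala REPL check:

    scala> Long.MaxValue
    res0: Long = 9223372036854775807
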
>> >
>> >
>> > Any idea why the offset falls out of range? I'm using the 0.6 release.
>> >
>> > Thanks,
>> > Florian
>> >
>>
>
>
>
> --
> Best regards,
>
> Florian
> http://twitter.com/flo <http://twitter.com/floleibert>
> http://flori.posterous.com/
>
>


-- 
Best regards,

Florian
http://twitter.com/flo <http://twitter.com/floleibert>
http://flori.posterous.com/
