kafka-users mailing list archives

From "Bello, Bob" <Bob.Be...@dish.com>
Subject Possible corrupted index (Kafka 0.8)
Date Tue, 20 Aug 2013 19:37:59 GMT
Hello Kafka Club,

We are running a July 29th git pull of 0.8 Kafka. Linux Sun JDK1.7.0_25 64bit

We have what appears to be a corrupted index for a log file. This has occurred on a low-volume
topic, on a single partition:

-  The leader Kafka broker thinks this topic is at offset: 1808

-  The replication-offset-checkpoint file says the offset for this partition is 1808:  rain-burn-in 75 1808

-  The replica Kafka broker has a checkpoint offset of: 1539
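For what it's worth, the two checkpoint files are easy to diff by hand. This is a minimal sketch, assuming the plain-text format I see in the 0.8 tree (a version line, an entry count, then one "topic partition offset" line per partition); the sample file below is fabricated from the numbers above, and the path is hypothetical:

```shell
# Sketch only: replication-offset-checkpoint in 0.8 appears to be plain text:
# a version line, an entry count, then "topic partition offset" per line.
# Recreate the entry quoted above as a sample file (the path is hypothetical).
cat > /tmp/replication-offset-checkpoint <<'EOF'
0
1
rain-burn-in 75 1808
EOF

# Pull the recorded offset for topic rain-burn-in, partition 75, so the
# leader's and the replica's checkpoints can be compared side by side.
awk '$1 == "rain-burn-in" && $2 == 75 { print $3 }' /tmp/replication-offset-checkpoint
# prints: 1808
```

Running the same one-liner against each broker's log.dir makes the 1808-vs-1539 divergence obvious.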

It appears that replication breaks, which causes 100-200 MB/s of NIO bandwidth as the follower
tries to replicate this topic/partition.

If I use the simple console consumer to consume this topic/partition, I can only consume up to
offset 1539:

JAVA_HOME=~/jdk1.7.0_25 ./kafka-simple-consumer-shell.sh --broker-list tm1-kafkabroker101:9092
--partition 75 --print-offsets --topic rain-burn-in --skip-message-on-error | grep 'next offset'


next offset = 1535
next offset = 1536
next offset = 1537
next offset = 1538
next offset = 1539

Then the simple consumer just stops (it does not crash, but appears stuck).

If I tell the simple consumer to start at offset 1541, then the simple console consumer can
continue consuming messages until it reaches the most current offset.

JAVA_HOME=~/jdk1.7.0_25 ./kafka-simple-consumer-shell.sh --broker-list tm1-kafkabroker101:9092
--partition 75 --print-offsets --topic rain-burn-in --skip-message-on-error --offset 1541
| grep 'next offset'

next offset = 1800
next offset = 1801
next offset = 1802
next offset = 1803
next offset = 1804
next offset = 1805
next offset = 1806
next offset = 1807
next offset = 1808

From the Kafka server.log file, I found the following errors:

2013-08-20 11:15:55 ERROR server.KafkaApis - [KafkaApi-1] Error when processing fetch request
for partition [rain-burn-in,75] offset 1047030 from consumer with correlation id 0
kafka.common.OffsetOutOfRangeException: Request for offset 1047030 but we only have log segments
in the range 0 to 1801.
        at kafka.log.Log.read(Unknown Source)
        at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(Unknown Source)
        at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(Unknown
        at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(Unknown
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(Unknown Source)
        at kafka.server.KafkaApis.handleFetchRequest(Unknown Source)
        at kafka.server.KafkaApis.handle(Unknown Source)
        at kafka.server.KafkaRequestHandler.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:724)
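Given the fetch error above, it may be worth looking at what the broker actually has on disk for that partition before touching anything. I believe 0.8 ships kafka.tools.DumpLogSegments for this; I am going from memory on the exact flags, so double-check with the tool's help output, and the data directory below is hypothetical:

```shell
JAVA_HOME=~/jdk1.7.0_25 ./kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /data/kafka-logs/rain-burn-in-75/00000000000000000000.index
```

If the dumped entries stop making sense around offset 1539/1540, that would point at the index rather than the log itself.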

Is there a suggested course of action short of removing the log and index? I was looking for
documentation on the log index format (perhaps to modify/fix) but did not find it anywhere.
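I have not found official docs on the index format either. From reading the 0.8 source, my understanding (an assumption, so please verify against your checkout) is that each .index entry is 8 bytes: a 4-byte relative offset followed by a 4-byte file position, both big-endian. That is small enough to decode with od and awk; the sample index below is fabricated with two entries (0 -> 0 and 5 -> 1024), since I obviously cannot attach the corrupt file:

```shell
# Assumed format: 8-byte entries, each a 4-byte big-endian relative offset
# followed by a 4-byte big-endian file position. Build a fake two-entry
# index file (entries 0->0 and 5->1024); the path is hypothetical.
printf '\000\000\000\000\000\000\000\000\000\000\000\005\000\000\004\000' > /tmp/sample.index

# Dump the bytes as unsigned decimals, then rebuild each big-endian 32-bit pair.
od -A n -t u1 -v /tmp/sample.index |
awk '{ for (i = 1; i <= NF; i++) b[n++] = $i }
     END { for (j = 0; j < n; j += 8) {
             off = ((b[j]*256 + b[j+1])*256 + b[j+2])*256 + b[j+3]
             pos = ((b[j+4]*256 + b[j+5])*256 + b[j+6])*256 + b[j+7]
             printf "relative offset %d -> position %d\n", off, pos } }'
# prints:
# relative offset 0 -> position 0
# relative offset 5 -> position 1024
```

Pointed at the real .index file, a decoder like this should show where the entries stop being monotonically increasing, which is what I would want to see before attempting any modify/fix.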


