kafka-users mailing list archives

From Peter Davis <davi...@gmail.com>
Subject Re: Brokers crash because the __consumer_offsets folders were deleted
Date Sat, 02 Jul 2016 04:15:04 GMT
Dear 黄杰斌:

I am guessing your operating system is configured to delete your /tmp directory when you restart
the server.
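If you want to confirm that, here is a quick, distro-dependent sketch for checking whether some tmp-cleaning mechanism is in play (the exact paths and rule syntax vary by distribution):

```shell
# systemd-based distros: look for tmpfiles.d rules that age out /tmp
# (a rule like "q /tmp 1777 root root 10d" deletes files older than 10 days)
grep -rs 'tmp' /usr/lib/tmpfiles.d/ /etc/tmpfiles.d/ 2>/dev/null || true

# check whether /tmp is a tmpfs mount, whose contents vanish on every reboot
mount | grep ' /tmp ' || true

# older cron-based systems may run tmpwatch/tmpreaper daily instead
ls /etc/cron.daily/ 2>/dev/null | grep -i 'tmp' || true
```

Any hit from the above would explain log segments disappearing out from under a running or restarted broker.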

You will need to change the "log.dir" property in your broker's server.properties file to
someplace permanent.  Unfortunately, your data is lost unless you had a backup or had configured
replication. 
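For illustration, the relevant lines in server.properties might look like this (the /var/lib/kafka-logs path is only an example; use whatever permanent mount suits your system):

```properties
# Store log segments on a permanent filesystem instead of /tmp.
# log.dirs takes precedence over log.dir and accepts a comma-separated list.
log.dirs=/var/lib/kafka-logs

# Replicate the offsets topic so losing one disk is recoverable
# (this only takes effect when the topic is first created).
offsets.topic.replication.factor=3
```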

log.dir    The directory in which the log data is kept (supplemental for log.dirs property)
           Type: string    Default: /tmp/kafka-logs    Importance: high


Dear Community: why does log.dir default to a directory under /tmp?  That is unsafe as a default.

-Peter


> On Jun 30, 2016, at 11:19 PM, 黄杰斌 <jben.huang@gmail.com> wrote:
> 
> Hi All,
> 
> Have you encountered the issue below when using kafka_2.11-0.10.0.0?
> All of our brokers crashed because the __consumer_offsets folders were deleted.
> sample log:
> [2016-06-30 12:46:32,579] FATAL [Replica Manager on Broker 2]: Halting due to unrecoverable I/O error while handling produce request: (kafka.server.ReplicaManager)
> kafka.common.KafkaStorageException: I/O exception in append to log '__consumer_offsets-32'
>        at kafka.log.Log.append(Log.scala:329)
>        at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
>        at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
>        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
>        at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
>        at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
>        at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
>        at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
>        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>        at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
>        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>        at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
>        at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
>        at kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:232)
>        at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:424)
>        at kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:424)
>        at scala.Option.foreach(Option.scala:257)
>        at kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:424)
>        at kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:310)
>        at kafka.server.KafkaApis.handle(KafkaApis.scala:84)
>        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>        at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: /tmp/kafka2-logs/__consumer_offsets-32/00000000000000000000.index (No such file or directory)
>        at java.io.RandomAccessFile.open0(Native Method)
>        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
>        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
>        at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:286)
>        at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:285)
>        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
>        at kafka.log.OffsetIndex.resize(OffsetIndex.scala:285)
>        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:274)
>        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
>        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
>        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
>        at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:273)
>        at kafka.log.Log.roll(Log.scala:655)
>        at kafka.log.Log.maybeRoll(Log.scala:630)
>        at kafka.log.Log.append(Log.scala:383)
>        ... 23 more
> 
> No one removed those folders, and the __consumer_offsets topic is managed by
> the broker itself, so no one should be able to delete it.
> Do you know why this happened, and how can we avoid it?
> 
> Best Regards,
> Ben
