kafka-users mailing list archives

From Manikumar Reddy <manikumar.re...@gmail.com>
Subject Re: Topic not getting deleted on 0.8.2.1
Date Thu, 28 Jul 2016 16:16:11 GMT
Many delete-topic related issues have been fixed in the latest
versions. I highly recommend moving to the latest version.
https://issues.apache.org/jira/browse/KAFKA-1757 fixes a similar issue on the
Windows platform.
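
One way to see whether a topic is still pending deletion is to check the
/admin/delete_topics path in ZooKeeper, which is where AdminUtils.deleteTopic
writes the deletion marker. A minimal sketch (the ZooKeeper address and
timeouts are illustrative):

    import org.I0Itec.zkclient.ZkClient
    import kafka.utils.ZKStringSerializer

    // Topics stuck "marked for deletion" remain listed under /admin/delete_topics
    // until the controller is able to finish the delete.
    val zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer)
    try {
      val pending = zkClient.getChildren("/admin/delete_topics")
      println("Topics pending deletion: " + pending)
    } finally {
      zkClient.close()
    }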

On Thu, Jul 28, 2016 at 3:40 PM, Ghosh, Prabal Kumar <
prabal.kumar.ghosh@sap.com> wrote:

> Hi Kafka Users,
>
> We are using Kafka 0.8.2.1 and are not able to delete any topic.
> We use AdminUtils to create and delete topics, roughly as in the sketch below.
> Topics get created successfully, with a correct leader and ISR for each
> topic partition.
> But when we try to delete a topic using AdminUtils.deleteTopic(), it
> fails: the topic stays indefinitely
> marked for deletion. When I look at the topic details, all partitions have
> Leader: -1 and ISR: {}.
> For cleanup, I had to manually delete the topic partitions from ZooKeeper and
> the Kafka log directories and restart both processes.
> But since we are going to use Kafka in production, this cleanup
> approach won't work.
>
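> The create and delete calls look roughly like this (a minimal sketch; the
> ZooKeeper address, timeouts, partition count, and replication factor are
> illustrative):
>
>     import java.util.Properties
>     import org.I0Itec.zkclient.ZkClient
>     import kafka.admin.AdminUtils
>     import kafka.utils.ZKStringSerializer
>
>     val zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer)
>     try {
>       // Create the topic; this registers it under /brokers/topics/<topic> in ZooKeeper.
>       AdminUtils.createTopic(zkClient, "jick", 3, 1, new Properties())
>
>       // Delete the topic; this only creates the marker /admin/delete_topics/<topic>,
>       // and the controller then carries out the actual deletion.
>       AdminUtils.deleteTopic(zkClient, "jick")
>     } finally {
>       zkClient.close()
>     }
>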
> I have set delete.topic.enable=true.
>
> Any suggestions?
>
> Controller Logs:
>
> [2016-07-28 20:52:16,670] DEBUG [Replica state machine on controller 0]:
> Are all replicas for topic jick deleted
> Map([Topic=jick,Partition=1,Replica=0] -> ReplicaDeletionIneligible,
> [Topic=jick,Partition=2,Replica=0] -> ReplicaDeletionStarted,
> [Topic=jick,Partition=0,Replica=0] -> ReplicaDeletionIneligible)
> (kafka.controller.ReplicaStateMachine)
> [2016-07-28 20:52:16,670] INFO [delete-topics-thread-0], Deletion for
> replicas 0 for partition [jick,2] of topic jick in progress
> (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
> [2016-07-28 20:52:16,670] INFO [delete-topics-thread-0], Not retrying
> deletion of topic jick at this time since it is marked ineligible for
> deletion (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
> [2016-07-28 20:52:16,670] DEBUG [Topic Deletion Manager 0], Waiting for
> signal to start or continue topic deletion
> (kafka.controller.TopicDeletionManager)
> [2016-07-28 20:52:16,670] DEBUG [Topic Deletion Manager 0], Delete topic
> callback invoked for StopReplicaResponse(36,Map([jick,2] -> -1),0)
> (kafka.controller.TopicDeletionManager)
> [2016-07-28 20:52:16,670] DEBUG [Topic Deletion Manager 0], Deletion
> failed for replicas [Topic=jick,Partition=2,Replica=0]. Halting deletion
> for topics Set(jick) (kafka.controller.TopicDeletionManager)
> [2016-07-28 20:52:16,670] INFO [Replica state machine on controller 0]:
> Invoking state change to ReplicaDeletionIneligible for replicas
> [Topic=jick,Partition=2,Replica=0] (kafka.controller.ReplicaStateMachine)
> [2016-07-28 20:52:16,670] INFO [Topic Deletion Manager 0], Halted deletion
> of topics jick (kafka.controller.TopicDeletionManager)
> [2016-07-28 20:52:16,670] INFO [delete-topics-thread-0], Handling deletion
> for topics HCP-BIGDATA,tick,hick,jick
> (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
> [2016-07-28 20:52:16,670] DEBUG [Replica state machine on controller 0]:
> Are all replicas for topic HCP-BIGDATA deleted
> Map([Topic=HCP-BIGDATA,Partition=0,Replica=0] -> OfflineReplica,
> [Topic=HCP-BIGDATA,Partition=2,Replica=0] -> OfflineReplica,
> [Topic=HCP-BIGDATA,Partition=1,Replica=0] -> OfflineReplica)
> (kafka.controller.ReplicaStateMachine)
> [2016-07-28 20:52:16,670] INFO [delete-topics-thread-0], Not retrying
> deletion of topic HCP-BIGDATA at this time since it is marked ineligible
> for deletion (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
> [2016-07-28 20:52:16,670] DEBUG [Replica state machine on controller 0]:
> Are all replicas for topic tick deleted
> Map([Topic=tick,Partition=2,Replica=0] -> OfflineReplica,
> [Topic=tick,Partition=1,Replica=0] -> OfflineReplica,
> [Topic=tick,Partition=0,Replica=0] -> OfflineReplica)
> (kafka.controller.ReplicaStateMachine)
> [2016-07-28 20:52:16,670] INFO [delete-topics-thread-0], Not retrying
> deletion of topic tick at this time since it is marked ineligible for
> deletion (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
> [2016-07-28 20:52:16,670] DEBUG [Replica state machine on controller 0]:
> Are all replicas for topic hick deleted
> Map([Topic=hick,Partition=1,Replica=0] -> OfflineReplica,
> [Topic=hick,Partition=0,Replica=0] -> OfflineReplica,
> [Topic=hick,Partition=2,Replica=0] -> OfflineReplica)
> (kafka.controller.ReplicaStateMachine)
> [2016-07-28 20:52:16,670] INFO [delete-topics-thread-0], Not retrying
> deletion of topic hick at this time since it is marked ineligible for
> deletion (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
> [2016-07-28 20:52:16,686] DEBUG [Replica state machine on controller 0]:
> Are all replicas for topic jick deleted
> Map([Topic=jick,Partition=1,Replica=0] -> ReplicaDeletionIneligible,
> [Topic=jick,Partition=2,Replica=0] -> ReplicaDeletionIneligible,
> [Topic=jick,Partition=0,Replica=0] -> ReplicaDeletionIneligible)
> (kafka.controller.ReplicaStateMachine)
> [2016-07-28 20:52:16,686] INFO [Topic Deletion Manager 0], Retrying delete
> topic for topic jick since replicas
> [Topic=jick,Partition=2,Replica=0],[Topic=jick,Partition=1,Replica=0],[Topic=jick,Partition=0,Replica=0]
> were not successfully deleted (kafka.controller.TopicDeletionManager)
> [2016-07-28 20:52:16,686] INFO [Replica state machine on controller 0]:
> Invoking state change to OfflineReplica for replicas
> [Topic=jick,Partition=2,Replica=0],[Topic=jick,Partition=1,Replica=0],[Topic=jick,Partition=0,Replica=0]
> (kafka.controller.ReplicaStateMachine)
> [2016-07-28 20:52:16,686] DEBUG [Controller 0]: Removing replica 0 from
> ISR  for partition [jick,2]. (kafka.controller.KafkaController)
> [2016-07-28 20:52:16,686] WARN [Controller 0]: Cannot remove replica 0
> from ISR of partition [jick,2] since it is not in the ISR. Leader = -1 ;
> ISR = List() (kafka.controller.KafkaController)
> [2016-07-28 20:52:16,686] DEBUG [Controller 0]: Removing replica 0 from
> ISR  for partition [jick,1]. (kafka.controller.KafkaController)
> [2016-07-28 20:52:16,702] WARN [Controller 0]: Cannot remove replica 0
> from ISR of partition [jick,1] since it is not in the ISR. Leader = -1 ;
> ISR = List() (kafka.controller.KafkaController)
> [2016-07-28 20:52:16,702] DEBUG [Controller 0]: Removing replica 0 from
> ISR  for partition [jick,0]. (kafka.controller.KafkaController)
> [2016-07-28 20:52:16,702] WARN [Controller 0]: Cannot remove replica 0
> from ISR of partition [jick,0] since it is not in the ISR. Leader = -1 ;
> ISR = List() (kafka.controller.KafkaController)
> [2016-07-28 20:52:16,702] DEBUG The stop replica request (delete = true)
> sent to broker 0 is  (kafka.controller.ControllerBrokerRequestBatch)
> [2016-07-28 20:52:16,702] DEBUG The stop replica request (delete = false)
> sent to broker 0 is
> [Topic=jick,Partition=2,Replica=0],[Topic=jick,Partition=1,Replica=0],[Topic=jick,Partition=0,Replica=0]
> (kafka.controller.ControllerBrokerRequestBatch)
> [2016-07-28 20:52:16,702] INFO [delete-topics-thread-0], Not retrying
> deletion of topic jick at this time since it is marked ineligible for
> deletion (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
>
>
> Server Logs:
>
> [2016-07-28 20:51:05,203] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,0],[jick,1],[jick,2]
> (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:51:05,203] INFO Completed load of log jick-0 with log end
> offset 0 (kafka.log.Log)
> [2016-07-28 20:51:05,219] INFO Created log for partition [jick,0] in
> C:\tmp\kafka-logs with properties {segment.index.bytes -> 10485760,
> file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms ->
> 1000, delete.retention.ms -> 86400000, index.interval.bytes -> 4096,
> retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete,
> unclean.leader.election.enable -> true, segment.ms -> 604800000,
> max.message.bytes -> 1000012, flush.messages -> 9223372036854775807,
> min.cleanable.dirty.ratio -> 0.5, retention.ms -> 7200000,
> segment.jitter.ms -> 0}. (kafka.log.LogManager)
> [2016-07-28 20:51:05,219] WARN Partition [jick,0] on broker 0: No
> checkpointed highwatermark is found for partition [jick,0]
> (kafka.cluster.Partition)
> [2016-07-28 20:51:05,219] INFO Completed load of log jick-1 with log end
> offset 0 (kafka.log.Log)
> [2016-07-28 20:51:05,234] INFO Created log for partition [jick,1] in
> C:\tmp\kafka-logs with properties {segment.index.bytes -> 10485760,
> file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms ->
> 1000, delete.retention.ms -> 86400000, index.interval.bytes -> 4096,
> retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete,
> unclean.leader.election.enable -> true, segment.ms -> 604800000,
> max.message.bytes -> 1000012, flush.messages -> 9223372036854775807,
> min.cleanable.dirty.ratio -> 0.5, retention.ms -> 7200000,
> segment.jitter.ms -> 0}. (kafka.log.LogManager)
> [2016-07-28 20:51:05,234] WARN Partition [jick,1] on broker 0: No
> checkpointed highwatermark is found for partition [jick,1]
> (kafka.cluster.Partition)
> [2016-07-28 20:51:05,234] INFO Completed load of log jick-2 with log end
> offset 0 (kafka.log.Log)
> [2016-07-28 20:51:05,251] INFO Created log for partition [jick,2] in
> C:\tmp\kafka-logs with properties {segment.index.bytes -> 10485760,
> file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms ->
> 1000, delete.retention.ms -> 86400000, index.interval.bytes -> 4096,
> retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete,
> unclean.leader.election.enable -> true, segment.ms -> 604800000,
> max.message.bytes -> 1000012, flush.messages -> 9223372036854775807,
> min.cleanable.dirty.ratio -> 0.5, retention.ms -> 7200000,
> segment.jitter.ms -> 0}. (kafka.log.LogManager)
> [2016-07-28 20:51:05,251] WARN Partition [jick,2] on broker 0: No
> checkpointed highwatermark is found for partition [jick,2]
> (kafka.cluster.Partition)
> [2016-07-28 20:51:15,593] INFO Closing socket connection to /127.0.0.1.
> (kafka.network.Processor)
> [2016-07-28 20:51:15,593] INFO Closing socket connection to /127.0.0.1.
> (kafka.network.Processor)
> [2016-07-28 20:51:16,812] INFO Closing socket connection to /127.0.0.1.
> (kafka.network.Processor)
> [2016-07-28 20:51:17,062] ERROR Closing socket for /127.0.0.1 because of
> error (kafka.network.Processor)
> java.io.IOException: An established connection was aborted by the software
> in your host machine
>                 at sun.nio.ch.SocketDispatcher.write0(Native Method)
>                 at
> sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:51)
>                 at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>                 at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>                 at
> sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
>                 at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
>                 at kafka.network.MultiSend.writeTo(Transmission.scala:101)
>                 at
> kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
>                 at kafka.network.Processor.write(SocketServer.scala:472)
>                 at kafka.network.Processor.run(SocketServer.scala:342)
>                 at java.lang.Thread.run(Thread.java:745)
> [2016-07-28 20:52:16,623] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,1] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,623] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,0] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,623] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,2] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,623] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,1] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,639] INFO Deleting index
> C:\tmp\kafka-logs\jick-1\00000000000000000000.index (kafka.log.OffsetIndex)
> [2016-07-28 20:52:16,639] ERROR [KafkaApi-0] error when handling request
> Name: StopReplicaRequest; Version: 0; CorrelationId: 36; ClientId: ;
> DeletePartitions: true; ControllerId: 0; ControllerEpoch: 3; Partitions:
> [jick,1] (kafka.server.KafkaApis)
> kafka.common.KafkaStorageException: Delete of index
> 00000000000000000000.index failed.
>                 at kafka.log.LogSegment.delete(LogSegment.scala:283)
>                 at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:618)
>                 at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:618)
>                 at
> scala.collection.Iterator$class.foreach(Iterator.scala:750)
>                 at
> scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
>                 at
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>                 at
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>                 at kafka.log.Log.delete(Log.scala:618)
>                 at kafka.log.LogManager.deleteLog(LogManager.scala:378)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply$mcV$sp(Partition.scala:143)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:138)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:138)
>                 at kafka.utils.Utils$.inLock(Utils.scala:535)
>                 at kafka.utils.Utils$.inWriteLock(Utils.scala:543)
>                 at kafka.cluster.Partition.delete(Partition.scala:138)
>                 at
> kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:150)
>                 at
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:183)
>                 at
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:182)
>                 at
> scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>                 at
> kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:182)
>                 at
> kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:135)
>                 at kafka.server.KafkaApis.handle(KafkaApis.scala:64)
>                 at
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>                 at java.lang.Thread.run(Thread.java:745)
> [2016-07-28 20:52:16,639] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,0] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,655] INFO Deleting index
> C:\tmp\kafka-logs\jick-0\00000000000000000000.index (kafka.log.OffsetIndex)
> [2016-07-28 20:52:16,655] ERROR [KafkaApi-0] error when handling request
> Name: StopReplicaRequest; Version: 0; CorrelationId: 36; ClientId: ;
> DeletePartitions: true; ControllerId: 0; ControllerEpoch: 3; Partitions:
> [jick,0] (kafka.server.KafkaApis)
> kafka.common.KafkaStorageException: Delete of index
> 00000000000000000000.index failed.
>                 at kafka.log.LogSegment.delete(LogSegment.scala:283)
>                 at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:618)
>                 at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:618)
>                 at
> scala.collection.Iterator$class.foreach(Iterator.scala:750)
>                 at
> scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
>                 at
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>                 at
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>                 at kafka.log.Log.delete(Log.scala:618)
>                 at kafka.log.LogManager.deleteLog(LogManager.scala:378)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply$mcV$sp(Partition.scala:143)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:138)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:138)
>                 at kafka.utils.Utils$.inLock(Utils.scala:535)
>                 at kafka.utils.Utils$.inWriteLock(Utils.scala:543)
>                 at kafka.cluster.Partition.delete(Partition.scala:138)
>                 at
> kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:150)
>                 at
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:183)
>                 at
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:182)
>                 at
> scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>                 at
> kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:182)
>                 at
> kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:135)
>                 at kafka.server.KafkaApis.handle(KafkaApis.scala:64)
>                 at
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>                 at java.lang.Thread.run(Thread.java:745)
> [2016-07-28 20:52:16,655] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,2] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,670] INFO Deleting index
> C:\tmp\kafka-logs\jick-2\00000000000000000000.index (kafka.log.OffsetIndex)
> [2016-07-28 20:52:16,670] ERROR [KafkaApi-0] error when handling request
> Name: StopReplicaRequest; Version: 0; CorrelationId: 36; ClientId: ;
> DeletePartitions: true; ControllerId: 0; ControllerEpoch: 3; Partitions:
> [jick,2] (kafka.server.KafkaApis)
> kafka.common.KafkaStorageException: Delete of index
> 00000000000000000000.index failed.
>                 at kafka.log.LogSegment.delete(LogSegment.scala:283)
>                 at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:618)
>                 at kafka.log.Log$$anonfun$delete$1.apply(Log.scala:618)
>                 at
> scala.collection.Iterator$class.foreach(Iterator.scala:750)
>                 at
> scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
>                 at
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>                 at
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>                 at kafka.log.Log.delete(Log.scala:618)
>                 at kafka.log.LogManager.deleteLog(LogManager.scala:378)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply$mcV$sp(Partition.scala:143)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:138)
>                 at
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:138)
>                 at kafka.utils.Utils$.inLock(Utils.scala:535)
>                 at kafka.utils.Utils$.inWriteLock(Utils.scala:543)
>                 at kafka.cluster.Partition.delete(Partition.scala:138)
>                 at
> kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:150)
>                 at
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:183)
>                 at
> kafka.server.ReplicaManager$$anonfun$stopReplicas$3.apply(ReplicaManager.scala:182)
>                 at
> scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
>                 at
> kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:182)
>                 at
> kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:135)
>                 at kafka.server.KafkaApis.handle(KafkaApis.scala:64)
>                 at
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>                 at java.lang.Thread.run(Thread.java:745)
> [2016-07-28 20:52:16,702] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,2] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,717] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,1] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:52:16,717] INFO [ReplicaFetcherManager on broker 0] Removed
> fetcher for partitions [jick,0] (kafka.server.ReplicaFetcherManager)
> [2016-07-28 20:54:00,200] INFO Rolled new log segment for 'nick-1' in 112
> ms. (kafka.log.Log)
> [2016-07-28 20:54:00,200] INFO Scheduling log segment 0 for log nick-1 for
> deletion. (kafka.log.Log)
> [2016-07-28 20:54:00,200] ERROR Uncaught exception in scheduled task
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix
> from  to .deleted for log segment 0
>                 at
> kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:259)
>                 at
> kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:729)
>                 at
> kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:720)
>                 at
> kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:488)
>                 at
> kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:488)
>                 at scala.collection.immutable.List.foreach(List.scala:381)
>                 at kafka.log.Log.deleteOldSegments(Log.scala:488)
>                 at
> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:411)
>                 at
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:442)
>                 at
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:440)
>                 at
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
>                 at
> scala.collection.Iterator$class.foreach(Iterator.scala:750)
>                 at
> scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
>                 at
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>                 at
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>                 at
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
>                 at kafka.log.LogManager.cleanupLogs(LogManager.scala:440)
>                 at
> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:182)
>                 at
> kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:99)
>                 at kafka.utils.Utils$$anon$1.run(Utils.scala:54)
>                 at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>                 at
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>                 at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>                 at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>                 at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>                 at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>                 at java.lang.Thread.run(Thread.java:745)
>
> Regards,
> Prabal K Ghosh
>
