kafka-users mailing list archives

From Sa Li <sal...@gmail.com>
Subject Re: no space left error
Date Tue, 06 Jan 2015 22:07:49 GMT
BTW, I found that the files under /kafka/logs are also getting bigger
and bigger, e.g. controller.log and state-change.log. Should I set up a
cron job to clean them up regularly, or is there a way to have them
deleted automatically?
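
One option I'm considering instead of a cron job: switch those appenders
in config/log4j.properties from DailyRollingFileAppender, which never
deletes old rolls, to RollingFileAppender, which caps total disk usage.
A rough sketch for controller.log (appender name and layout assumed from
the stock config; state-change.log would get the same treatment):

log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
# keep at most 5 rolled files of 10 MB each
log4j.appender.controllerAppender.MaxFileSize=10MB
log4j.appender.controllerAppender.MaxBackupIndex=5
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n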

thanks

AL

On Tue, Jan 6, 2015 at 2:01 PM, Sa Li <salicn@gmail.com> wrote:

> Hi, All
>
> We fixed the problem; I'd like to share what it was, in case someone
> comes across a similar issue. We added a data drive (/dev/sdb1) to each
> node but specified the wrong path in server.properties, which meant the
> data was written to the wrong drive, /dev/sda2, and quickly ate up all
> the space there. We have now corrected the path. sdb1 has 15 TB, which
> lets us store data for a while; it will be deleted after 1-2 weeks, per
> the retention config mentioned below.
>
> But I am kind of curious about David's comment: "... after having tuned
> retention bytes or retention (time?) incorrectly ..." How do you guys
> set log.retention.bytes? I set log.retention.hours=336 (2 weeks);
> should I leave log.retention.bytes at the default of -1, or set it to
> some other value?
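>
> For context, here is roughly what the retention part of my
> server.properties looks like (a sketch of my intent, not verified best
> practice; as I understand it, log.retention.bytes applies per
> partition, not per topic or per broker):
>
> # keep data for two weeks
> log.retention.hours=336
> # -1 disables the size-based limit, so segments are deleted on age alone;
> # a positive value caps each partition at roughly that many bytes
> log.retention.bytes=-1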
>
> thanks
>
> AL
>
> On Tue, Jan 6, 2015 at 12:43 PM, Sa Li <salicn@gmail.com> wrote:
>
>> Thanks for the reply; the disk is not full:
>>
>> root@exemplary-birds:~# df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/sda2       133G  3.4G  123G   3% /
>> none            4.0K     0  4.0K   0% /sys/fs/cgroup
>> udev             32G  4.0K   32G   1% /dev
>> tmpfs           6.3G  764K  6.3G   1% /run
>> none            5.0M     0  5.0M   0% /run/lock
>> none             32G     0   32G   0% /run/shm
>> none            100M     0  100M   0% /run/user
>> /dev/sdb1        14T   15G   14T   1% /srv
>>
>> Nor is the memory:
>>
>> root@exemplary-birds:~# free
>>              total       used       free     shared    buffers     cached
>> Mem:      65963372    9698380   56264992        776     170668    7863812
>> -/+ buffers/cache:    1663900   64299472
>> Swap:       997372          0     997372
>>
>> thanks
>>
>>
>> On Tue, Jan 6, 2015 at 12:10 PM, David Birdsong <david.birdsong@gmail.com
>> > wrote:
>>
>>> I'm keen to hear about how to work one's way out of a filled partition
>>> since I've run into this many times after having tuned retention bytes or
>>> retention (time?) incorrectly. The proper path to resolving this isn't
>>> obvious based on my many harried searches through documentation.
>>>
>>> I often end up stopping the particular broker, picking an unlucky
>>> topic/partition, deleting it, lowering retention bytes on any topics
>>> that consumed too much space, and restarting.
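>>>
>>> Roughly, the sequence looks like this (a sketch only; the topic name,
>>> paths, and ZooKeeper address are placeholders, and the alter syntax
>>> is from the 0.8.x tools as I remember it):
>>>
>>> # stop the affected broker
>>> bin/kafka-server-stop.sh
>>> # reclaim space by deleting one unlucky partition's directory; on a
>>> # replicated topic it should be re-fetched from the other brokers
>>> rm -rf /srv/kafka-logs/big-topic-0
>>> # lower the per-partition size cap so the topic can't fill the disk again
>>> bin/kafka-topics.sh --zookeeper zk1:2181 --alter \
>>>   --topic big-topic --config retention.bytes=1073741824
>>> # bring the broker back up
>>> bin/kafka-server-start.sh -daemon config/server.properties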
>>>
>>> On Tue, Jan 6, 2015 at 12:02 PM, Sa Li <salicn@gmail.com> wrote:
>>>
>>> > Continuing this issue: when I restart the server with
>>> > bin/kafka-server-start.sh config/server.properties
>>> >
>>> > it fails to start, with:
>>> >
>>> > [2015-01-06 20:00:55,441] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
>>> > java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
>>> >         at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>>> >         at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>>> >         at kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:188)
>>> >         at kafka.log.FileMessageSet$$anon$1.makeNext(FileMessageSet.scala:165)
>>> >         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
>>> >         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
>>> >         at kafka.log.LogSegment.recover(LogSegment.scala:165)
>>> >         at kafka.log.Log.recoverLog(Log.scala:179)
>>> >         at kafka.log.Log.loadSegments(Log.scala:155)
>>> >         at kafka.log.Log.<init>(Log.scala:64)
>>> >         at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:118)
>>> >         at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:113)
>>> >         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>>> >         at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
>>> >         at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:113)
>>> >         at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
>>> >         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>>> >         at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
>>> >         at kafka.log.LogManager.loadLogs(LogManager.scala:105)
>>> >         at kafka.log.LogManager.<init>(LogManager.scala:57)
>>> >         at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
>>> >         at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
>>> >         at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
>>> >         at kafka.Kafka$.main(Kafka.scala:46)
>>> >         at kafka.Kafka.main(Kafka.scala)
>>> > [2015-01-06 20:00:55,443] INFO [Kafka Server 100], shutting down
>>> > (kafka.server.KafkaServer)
>>> > [2015-01-06 20:00:55,444] INFO Terminate ZkClient event thread.
>>> > (org.I0Itec.zkclient.ZkEventThread)
>>> > [2015-01-06 20:00:55,446] INFO Session: 0x684a5ed9da3a1a0f closed
>>> > (org.apache.zookeeper.ZooKeeper)
>>> > [2015-01-06 20:00:55,446] INFO EventThread shut down
>>> > (org.apache.zookeeper.ClientCnxn)
>>> > [2015-01-06 20:00:55,447] INFO [Kafka Server 100], shut down completed
>>> > (kafka.server.KafkaServer)
>>> > [2015-01-06 20:00:55,447] INFO [Kafka Server 100], shutting down
>>> > (kafka.server.KafkaServer)
>>> >
>>> > Any ideas?
>>> >
>>> > On Tue, Jan 6, 2015 at 12:00 PM, Sa Li <salicn@gmail.com> wrote:
>>> >
>>> > > Here is the complete error message:
>>> > >
>>> > > -su: cannot create temp file for here-document: No space left on device
>>> > > OpenJDK 64-Bit Server VM warning: Insufficient space for shared memory file:
>>> > >    /tmp/hsperfdata_root/19721
>>> > > Try using the -Djava.io.tmpdir= option to select an alternate temp location.
>>> > > [2015-01-06 19:50:49,244] FATAL  (kafka.Kafka$)
>>> > > java.io.FileNotFoundException: conf (No such file or directory)
>>> > >         at java.io.FileInputStream.open(Native Method)
>>> > >         at java.io.FileInputStream.<init>(FileInputStream.java:146)
>>> > >         at java.io.FileInputStream.<init>(FileInputStream.java:101)
>>> > >         at kafka.utils.Utils$.loadProps(Utils.scala:144)
>>> > >         at kafka.Kafka$.main(Kafka.scala:34)
>>> > >         at kafka.Kafka.main(Kafka.scala)
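>>> > >
>>> > > Following the JVM's own hint above, one workaround might be to
>>> > > point the JVM temp dir at the big drive before starting the broker
>>> > > (untested on my side; /srv/tmp is just a placeholder on the 14T
>>> > > volume):
>>> > >
>>> > > mkdir -p /srv/tmp
>>> > > export KAFKA_OPTS="-Djava.io.tmpdir=/srv/tmp"
>>> > > bin/kafka-server-start.sh config/server.properties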
>>> > >
>>> > > On Tue, Jan 6, 2015 at 11:58 AM, Sa Li <salicn@gmail.com> wrote:
>>> > >
>>> > >>
>>> > >> Hi, All
>>> > >>
>>> > >> I am doing a performance test on our new Kafka production server,
>>> > >> but after sending some messages (even fake messages via
>>> > >> bin/kafka-run-class.sh
>>> > >> org.apache.kafka.clients.tools.ProducerPerformance), it hits a
>>> > >> connection error and the brokers shut down. After that, I see
>>> > >> errors such as:
>>> > >>
>>> > >> conf-su: cannot create temp file for here-document: No space left on device
>>> > >>
>>> > >> How can I fix this? I am concerned the same thing will happen when
>>> > >> we start publishing real messages to Kafka. Should I create a cron
>>> > >> job to regularly clean certain directories?
>>> > >>
>>> > >> thanks
>>> > >>
>>> > >> --
>>> > >>
>>> > >> Alec Li
>>> > >>
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > >
>>> > > Alec Li
>>> > >
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> > Alec Li
>>> >
>>>
>>
>>
>>
>> --
>>
>> Alec Li
>>
>
>
>
> --
>
> Alec Li
>



-- 

Alec Li
