lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <>
Subject [jira] Commented: (LUCENE-555) Index Corruption
Date Thu, 26 Oct 2006 17:13:17 GMT
Michael McCandless commented on LUCENE-555:

I would also add: I'm very surprised that a disk-full condition corrupts the Lucene index.  I can't
explain it.  I'd like to understand and fix it, so if we can get to the root cause here that'd be wonderful.

The worst that should happen on disk full is that the recent documents you had added, but the
writer had not yet committed, are lost (the rest of the index stays intact).

It's only upon successfully writing the new segments that Lucene writes a new "segments"
file referring to them.  After that, it removes the old segments.  Since it makes these
changes in this order, a disk-full exception should never affect the already written index.
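The ordering above can be sketched with a small stand-alone simulation (plain Java file I/O, not Lucene's actual code; the file names and the `commit` helper here are invented for illustration): if the failure happens before the "segments" file is rewritten, the old commit point still references only the old files.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Simplified model of the commit ordering described above:
// 1) write the new segment files, 2) rewrite "segments" to reference
// them, 3) only then delete the old segment files.
public class CommitOrdering {
    static void commit(Path dir, List<String> newSegs, boolean diskFullBeforeSegmentsFile)
            throws IOException {
        // 1. Write the new segment files.
        for (String seg : newSegs) Files.writeString(dir.resolve(seg), "data");
        // Simulated disk full before "segments" is rewritten: the old
        // "segments" file still references only the old segment files.
        if (diskFullBeforeSegmentsFile) throw new IOException("disk full");
        // 2. Rewrite "segments" to reference the new segment files.
        Files.writeString(dir.resolve("segments"), String.join(",", newSegs));
        // 3. Delete the old segments (omitted for brevity).
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("idx");
        Files.writeString(dir.resolve("_old.fnm"), "data");
        Files.writeString(dir.resolve("segments"), "_old.fnm");
        try {
            commit(dir, List.of("_new.fnm"), true);  // simulate disk full
        } catch (IOException e) {
            // Expected: the commit failed partway through.
        }
        // The prior commit point is untouched: "segments" still names _old.fnm.
        System.out.println(Files.readString(dir.resolve("segments")));  // prints "_old.fnm"
    }
}
```

A reader that opens the index through the old "segments" file therefore never sees the partially written files, which is why a disk-full failure should only lose uncommitted documents.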

Is it possible that disk full fails to throw an exception?  That would be spooky.

Note that I haven't tested this myself; this is just based on my current understanding of
the Lucene source code.  Does anyone see a case where disk full could corrupt the index?

> Index Corruption
> ----------------
>                 Key: LUCENE-555
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>    Affects Versions: 1.9
>         Environment: Linux FC4, Java 1.4.9
>            Reporter: dan
>            Priority: Critical
> Index Corruption
> >>>>>>>>> output
> ../_aki.fnm (No such file or directory)
>         at Method)
>         at<init>(
>         at$Descriptor.<init>(
>         at<init>(
>         at
>         at org.apache.lucene.index.FieldInfos.<init>(
>         at org.apache.lucene.index.SegmentReader.initialize(
>         at org.apache.lucene.index.SegmentReader.get(
>         at org.apache.lucene.index.SegmentReader.get(
>         at org.apache.lucene.index.IndexWriter.mergeSegments(
>         at org.apache.lucene.index.IndexWriter.mergeSegments(
>         at org.apache.lucene.index.IndexWriter.optimize(
> >>>>>>>>> input
> - I open an index, I read, I write, I optimize, and eventually the above happens. The index is unusable.
> - This has happened to me somewhere between 20 and 30 times now - on indexes of different shapes and sizes.
> - I don't know the reason. But, the following requirement applies regardless.
> >>>>>>>>> requirement
> - Like all modern database programs, there has to be a way to repair an index. Period.

This message is automatically generated by JIRA.