lucene-dev mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From "Michael McCandless (JIRA)" <>
Subject [jira] Updated: (LUCENE-702) Disk full during addIndexes(Directory[]) can corrupt index
Date Mon, 11 Dec 2006 17:10:22 GMT

Michael McCandless updated LUCENE-702:

    Attachment: LUCENE-702.patch

I've attached a patch with changes as described below.  I will commit
in a few days if no one objects!

All unit tests pass.

This patch fixes addIndexes to 1) not corrupt the index on exception,
2) be transactional (either all or nothing is actually added), and 3)
leave the writer instance in a consistent state (meaning, on an
exception, the segmentInfos state is restored to its starting point,
so that it matches what's actually in the index, and any unreferenced,
presumably partially written, files in the index are removed).
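
From a caller's perspective the contract is simple: if
addIndexes(Directory[]) throws, the target index is exactly as it was
before the call.  A minimal sketch of such a caller (the paths and
analyzer here are just placeholders):

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class AddIndexesCaller {
      public static void main(String[] args) throws IOException {
        Directory dest = FSDirectory.getDirectory("/path/to/dest", false);
        Directory[] sources = {
          FSDirectory.getDirectory("/path/to/src", false)
        };
        IndexWriter writer = new IndexWriter(dest, new StandardAnalyzer(), false);
        try {
          writer.addIndexes(sources);
        } catch (IOException e) {
          // With this patch: none of the source segments were
          // committed, partially written files have been removed, and
          // the writer is still in a usable, consistent state.
        } finally {
          writer.close();
        }
      }
    }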

I've also fixed IndexWriter.mergeSegments() and IndexReader.commit()
to keep the instance "consistent" on exception as well (ie, even when
not being called from addIndexes).  That is, on an exception, any
changes to segmentInfos state are rolled back, any opened readers (in
the merger) are closed, and non-committed files are removed.

I've added two unit tests (one for IndexWriter.addIndexes* and one for
IndexReader.close) that expose these various cases as failures against
the current Lucene sources; they pass with this patch.

These unit tests use a handy new MockRAMDirectory that can simulate
disk full, inject random IOExceptions, measure peak disk usage, etc.
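
For illustration, here is roughly what such a test looks like.  This
is a hedged sketch: the setMaxSizeInBytes knob shown below is an
assumption based on the capabilities described above, not necessarily
the literal API in the patch.

    import java.io.IOException;
    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class DiskFullSketch {
      public static void main(String[] args) throws IOException {
        // MockRAMDirectory is the test directory added by this patch:
        MockRAMDirectory dir = new MockRAMDirectory();
        // Hypothetical knob: cap the "disk" so writes past the limit
        // throw IOException, simulating disk full:
        dir.setMaxSizeInBytes(64 * 1024);

        IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(), true);
        try {
          Document doc = new Document();
          doc.add(new Field("content", "some text", Field.Store.YES,
                            Field.Index.TOKENIZED));
          for (int i = 0; i < 100000; i++) {
            writer.addDocument(doc);   // eventually hits the cap
          }
        } catch (IOException e) {
          // Expected: the simulated disk filled up.  The test then
          // verifies the index is still consistent (eg, opens an
          // IndexReader and checks the doc count).
        } finally {
          writer.close();
        }
      }
    }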
Here is the summary of the approach:

  * Added private methods startTransaction(), rollbackTransaction()
    and commitTransaction() to IndexWriter (see the sketch after this
    list).  During a transaction (inTransaction = true), I block
    changes to the index that would alter how it was at the start of
    the transaction: we don't write any new segments_N files (but we
    do update the segmentInfos in memory, and do write / merge new
    segment files); we are not allowed to delete any segment files
    that were in the index at the start (I added "protectedSegments"
    to track this during a transaction).

  * For the addIndexes methods, I first call startTransaction(), then
    do the actual work in a try block, and then, in a finally block,
    call either commitTransaction() or rollbackTransaction().

  * Fixed mergeSegments to respect whether it's in a transaction (and
    not commit/delete).  Also, fixed this method so that if it's not
    in a transaction (ie, being called by optimize or
    maybeMergeSegments) it still (in a try/finally) leaves
    segmentInfos "consistent" on hitting an exception, and removes any
    partial files that had been written.

  * Fixed IndexReader.commit with similar rollback logic to reset
    internal state if the commit fails.
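
In outline, the transaction plumbing inside IndexWriter looks
something like this.  This is a simplified sketch of the approach, not
the literal patch: the helper methods filesInIndexNow() and
deleteUnreferencedFiles() are hypothetical stand-ins, and details like
synchronization and the IndexFileDeleter interaction are omitted.

    // Fields (sketch):
    private boolean inTransaction;
    private SegmentInfos rollbackSegmentInfos; // snapshot at start
    private HashSet protectedSegments;         // files present at start

    private void startTransaction() throws IOException {
      inTransaction = true;
      rollbackSegmentInfos = (SegmentInfos) segmentInfos.clone();
      protectedSegments = filesInIndexNow();   // hypothetical helper
    }

    private void rollbackTransaction() throws IOException {
      // Restore the in-memory state to what it was at the start, then
      // remove any partially written (now unreferenced) files:
      segmentInfos = rollbackSegmentInfos;
      deleteUnreferencedFiles();               // hypothetical helper
      inTransaction = false;
    }

    private void commitTransaction() throws IOException {
      // Only now do we write a new segments_N file, making the
      // transaction's changes visible atomically:
      inTransaction = false;
      segmentInfos.write(directory);
      deleteUnreferencedFiles();
    }

    public void addIndexes(Directory[] dirs) throws IOException {
      startTransaction();
      boolean success = false;
      try {
        // ... merge / copy in the incoming segments; mergeSegments()
        // sees inTransaction == true and neither commits a segments_N
        // nor deletes protected files ...
        success = true;
      } finally {
        if (success) commitTransaction();
        else rollbackTransaction();
      }
    }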

It is actually possible to get an IOException from
addIndexes(Directory[]) and yet find that all docs were in fact added.
This happens when disk full is hit while building the CFS file on the
final optimize.  In this case the index is consistent, but that
segment will remain in non-CFS format and will show all docs as added.
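
So a caller that catches an IOException from addIndexes cannot assume
the documents were lost; comparing the writer's doc count against the
expected total tells the real story.  A small sketch (the
expectedDocCount parameter is just for illustration):

    import java.io.IOException;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;

    class AddIndexesCaveat {
      // Returns true if all expected docs made it in despite the
      // exception (the segment is then simply left in non-CFS format).
      static boolean addedDespiteException(IndexWriter writer,
                                           Directory[] dirs,
                                           int expectedDocCount) {
        try {
          writer.addIndexes(dirs);
          return true;
        } catch (IOException e) {
          return writer.docCount() == expectedDocCount;
        }
      }
    }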

Various other small changes:

  * Changed RAMDirectory, and its createOutput method, to be
    non-final.  Also changed private -> protected for some of its
    instance variables.

  * Created MockRAMDirectory subclass.

  * Moved the RAMDirectory.getRecomputedSizeInBytes() method into
    MockRAMDirectory.
  * Changed IndexFileDeleter to use a HashSet (instead of Vector) to
    track pending files since with these changes the same file can be
    added to the pending set many times.

  * Added some javadocs around temporary disk usage by these methods
    (this came up on java-user recently).  Also included a check in
    one of the unit tests to assert this.

  * In SegmentInfos, separately track (in memory only) which
    generation we will use for writing vs which generation was last
    successfully read.  These are almost always the same, but can
    differ when a commit failed while writing the next segments_N
    file.

  * Changed package protected IndexFileDeleter.commitPendingFiles() to
    actually delete the files (previously you also had to separately
    call deleteFiles).

  * Added SegmentInfos.getNextSegmentFileName()

  * Added SegmentInfos.clone() to also copy the contents of the
    vector (ie, each SegmentInfo); see the sketch after this list.

  * Added SegmentInfo.reset(), which is used to "copy back" a previous
    clone of a SegmentInfo (IndexReader uses this to rollback).

  * Added package protected SegmentReader.getSegmentName()

  * Fixed a small bug in IndexFileDeleter that was failing to remove
    the unreferenced CFS file for a segment that wasn't successfully
    converted to CFS format (and added a case to TestIndexFileDeleter
    to expose the bug first).

  * Fixed a few minor typos.
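
To illustrate the SegmentInfos.clone() change mentioned above: since
SegmentInfos extends Vector, the default clone() would copy the vector
but share the SegmentInfo elements, so mutating the live copy would
also corrupt the rollback snapshot.  A sketch of the deep copy
(assuming SegmentInfo itself is cloneable, per the patch description):

    // In SegmentInfos (sketch):
    public Object clone() {
      SegmentInfos sis = (SegmentInfos) super.clone();
      for (int i = 0; i < sis.size(); i++) {
        // Replace the shared element with its own copy:
        sis.set(i, ((SegmentInfo) sis.get(i)).clone());
      }
      return sis;
    }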

> Disk full during addIndexes(Directory[]) can corrupt index
> ----------------------------------------------------------
>                 Key: LUCENE-702
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>    Affects Versions: 2.1
>            Reporter: Michael McCandless
>         Assigned To: Michael McCandless
>         Attachments: LUCENE-702.patch
> This is a spinoff of LUCENE-555
> If the disk fills up during this call then the committed segments file can reference
> segments that were not written.  Then the whole index becomes unusable.
> Does anyone know of any other cases where disk full could corrupt the index?
> I think disk full should at worst lose the documents that were "in flight" at the time.
> It shouldn't corrupt the index.
