lucene-dev mailing list archives

From "Simon Willnauer (JIRA)" <>
Subject [jira] Updated: (LUCENE-1566) Large Lucene index can hit false OOM due to Sun JRE issue
Date Wed, 15 Jul 2009 16:02:14 GMT


Simon Willnauer updated LUCENE-1566:

    Attachment: LUCENE_1566_IndexInput.patch

@Mike: Thanks for your comments.
I ran my test case to reproduce the OOM with the other directory implementations (SimpleFSDirectory
and NIOFSDirectory), and both of them suffer from the JVM bug. My test case is as follows:
1. Start the JVM with -Xmx2500M (32-bit), on either 1.5 or 1.6 -- I hit the error with all of my VMs.
2. Index 250,000,000 simple documents and optimize the index once the last document is added.
3. Open an IndexReader with either a SimpleFSDirectory or an NIOFSDirectory.
4. Catch the error :)
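
For reference, a minimal sketch of such a repro harness against the current 2.9 trunk API -- the
index path, field name and document shape are assumptions, not the attached test case:

import java.io.File;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.NIOFSDirectory;

public class Lucene1566Repro {
  public static void main(String[] args) throws Exception {
    // Run on a 32-bit Sun JVM (1.5 or 1.6) with -Xmx2500M.
    NIOFSDirectory dir = new NIOFSDirectory(new File("/tmp/lucene1566"));
    IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(),
        true, IndexWriter.MaxFieldLength.LIMITED);
    // One indexed field with norms: one norms byte per document, later
    // read back as a single contiguous array of length maxDoc().
    Document doc = new Document();
    doc.add(new Field("f", "x", Field.Store.NO, Field.Index.NOT_ANALYZED));
    for (int i = 0; i < 250000000; i++) {
      writer.addDocument(doc);
    }
    writer.optimize(); // one segment -> one very large norms read
    writer.close();
    // Opening the reader performs the large contiguous read that can
    // hit the false OutOfMemoryError in the JRE.
    IndexReader reader = IndexReader.open(dir);
    reader.close();
  }
}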

I added a workaround to FSDirectory / NIOFSDirectory / SimpleFSDirectory, as well as a test case
that checks the added code for correctness. The included test case will not trigger the JVM bug
itself, since triggering it requires such a specific setup (a huge optimized index and a 32-bit
JVM with a large heap).
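
To sketch the idea behind the workaround (this is not the attached patch; the chunk-size constant
and the helper's shape are assumptions):

import java.io.IOException;
import java.io.RandomAccessFile;

class ChunkedReads {
  // Assumed upper bound per native read; the real patch picks its own limit.
  private static final int CHUNK_SIZE = 100 * 1024 * 1024;

  // Instead of handing the full requested length to a single
  // RandomAccessFile.read() call (which makes the JRE allocate one huge
  // temporary native buffer), split the read into bounded chunks.
  static void readBytesChunked(RandomAccessFile file, byte[] b, int offset, int len)
      throws IOException {
    int total = 0;
    while (total < len) {
      int toRead = Math.min(CHUNK_SIZE, len - total);
      int n = file.read(b, offset + total, toRead);
      if (n == -1) {
        throw new IOException("read past EOF");
      }
      total += n;
    }
  }
}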

Any comments welcome.

> Large Lucene index can hit false OOM due to Sun JRE issue
> ---------------------------------------------------------
>                 Key: LUCENE-1566
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>    Affects Versions: 2.4.1
>            Reporter: Michael McCandless
>            Assignee: Simon Willnauer
>            Priority: Minor
>             Fix For: 2.9
>         Attachments: LUCENE-1566.patch, LUCENE-1566.patch, LUCENE_1566_IndexInput.patch
> This is not a Lucene issue, but I want to open this so future Google
> diggers can more easily find it.
> There's this nasty bug in Sun's JRE:
> The gist seems to be: if you try to read a large number of bytes (e.g.
> 200 MB) in a single call, you can incorrectly hit OOM. Lucene does this
> with norms, since we read one byte per doc per field with norms, as a
> contiguous array of length maxDoc().
> The workaround was a custom patch to do large file reads as several
> smaller reads.
> Background here:
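
For illustration, the read pattern the quoted description refers to boils down to a few lines;
the file name and size here are assumptions:

import java.io.RandomAccessFile;

public class BigReadDemo {
  public static void main(String[] args) throws Exception {
    // On an affected 32-bit Sun JRE, one very large read request can throw
    // a false OutOfMemoryError from native code even though the Java heap
    // has room for the destination array, because the JRE tries to
    // allocate a temporary native buffer of the full requested length.
    RandomAccessFile raf = new RandomAccessFile("big.bin", "r"); // assumed >= 200 MB file
    byte[] norms = new byte[200 * 1024 * 1024]; // ~200 MB, like maxDoc() norms
    raf.readFully(norms, 0, norms.length);      // single huge read request
    raf.close();
  }
}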

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
