lucene-dev mailing list archives

From mark harwood <>
Subject Re: Initially creating index throws out of memory
Date Mon, 11 Apr 2005 09:16:53 GMT
By default Lucene does not have a setting that allows
you to control memory usage directly in terms of bytes
of RAM. 
It does offer IndexWriter.setMaxBufferedDocs which
dictates how many documents are accumulated in RAM
(which is obviously fast) before the RAM is flushed to
disk. Setting this value is a bit of a guessing game
to work out how many of your documents will fit in the
RAM you have available.
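How this looks in code (a sketch only; it assumes the Lucene API of this era, where IndexWriter takes a path, analyzer, and create flag — the "/tmp/index" path and the value 1000 are illustrative, not recommendations):

```java
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class BufferedDocsExample {
    public static void main(String[] args) throws IOException {
        IndexWriter writer =
            new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
        // Accumulate up to 1000 documents in RAM before flushing a
        // segment to disk; lower this if you hit OutOfMemoryError,
        // raise it if you have heap to spare.
        writer.setMaxBufferedDocs(1000);
        // ... addDocument() calls here ...
        writer.close();
    }
}
```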
An alternative is to write documents into a
RAMDirectory of your own and monitor its size as you
add them (code follows below). When the RAMDirectory
exceeds your chosen RAM limit, merge it into your
file-based FSDirectory using
fileIndexWriter.addIndexes(new Directory[] { myRAMDir });

This allows you to control RAM usage more precisely.
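The control flow of that approach can be sketched without any Lucene dependencies (the BufferedIndexer class, the RAM limit, and the per-document byte counts below are all illustrative; in real code the size would come from summing the RAMDirectory's file lengths and the flush would call fileIndexWriter.addIndexes and start a fresh RAMDirectory):

```java
// Sketch of "buffer in RAM, flush to disk when over a byte budget".
class BufferedIndexer {
    private final long ramLimit;    // chosen RAM budget in bytes
    private long bufferedBytes = 0; // current estimated buffer size
    private int flushCount = 0;     // how many merges to disk so far

    BufferedIndexer(long ramLimit) {
        this.ramLimit = ramLimit;
    }

    // Add one document of the given (estimated) size in bytes,
    // flushing the buffer whenever it exceeds the budget.
    void addDocument(long docBytes) {
        bufferedBytes += docBytes;
        if (bufferedBytes > ramLimit) {
            flush();
        }
    }

    // Stand-in for: fileIndexWriter.addIndexes(new Directory[] { myRAMDir });
    // followed by replacing myRAMDir with a new, empty RAMDirectory.
    void flush() {
        flushCount++;
        bufferedBytes = 0;
    }

    int getFlushCount() {
        return flushCount;
    }
}
```

For example, with a 100-byte budget and ten 30-byte documents, the buffer is flushed twice (after the 4th and 8th documents) and the last two documents remain buffered.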
Here is the routine to monitor the size of your RAMDirectory:

public long getRAMSize(RAMDirectory ramDir) throws IOException {
	String[] segs = ramDir.list();
	long totalSize = 0;
	for (int i = 0; i < segs.length; i++) {
		// Sum the length of every file held in the RAMDirectory
		totalSize += ramDir.fileLength(segs[i]);
	}
	return totalSize;
}


--- Matthias Stoll <> wrote:
> Hi all
> I'm trying to create an index on about 40000
> documents. At 50% done the
> system throws an out of memory exception. Running on
> a 1 Gig Xeon
> workstation using WSAD (already consumes 500 megs
> at startup). Is there
> any way to prevent Lucene from eating up memory?
> tx
> __
> Matthias Stoll
> hpi GmbH
> Application Development
> Am Limes Park 2
> D - 65843 Sulzbach/Ts.
> Web site:


