lucene-java-user mailing list archives

From Andrzej Bialecki
Subject Re: Most efficient way to index 14M documents (out of memory/file handles)
Date Wed, 07 Jul 2004 09:57:53 GMT

> A colleague of mine found the fastest way to index was to use a
> RAMDirectory, letting it grow to a pre-defined maximum size, then
> merging it to a new temporary file-based index to flush it. Repeat
> this, creating new directories for all the file-based indexes, then
> perform a merge into one index once all docs are indexed.
>
> I haven't managed to test this for myself, but my colleague says he
> noticed a considerable speed-up by merging once at the end with this
> approach, so you may want to give it a try. (This was with Lucene 1.3)

I can confirm that this approach works quite well - I use it myself in 
some applications, both with Lucene 1.3 and 1.4. The disadvantage is, of 
course, that memory consumption goes up, so you have to be careful to 
cap the maximum size of the RAMDirectory according to your maximum heap 
size.
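For what it's worth, the batching pattern described above can be sketched roughly as below. This is a sketch against the Lucene 1.x-era API (`IndexWriter(Directory, Analyzer, boolean)`, `Field.Text`, `IndexWriter.addIndexes(Directory[])`), not a tested implementation; since `RAMDirectory` in that era exposed no size accessor, a fixed document count per batch stands in for the "pre-defined maximum size", and the `DOCS_PER_BATCH` constant, the `body` field name, and the `/tmp` paths are all hypothetical:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;

public class BatchedIndexer {

    // Hypothetical cap: flush the RAMDirectory to disk every N documents.
    // (A document count is used as a rough proxy for a byte-size limit;
    // tune it so the batch stays safely inside your max heap.)
    private static final int DOCS_PER_BATCH = 50000;

    public static void indexAll(Iterable<String> texts) throws IOException {
        List<Directory> batches = new ArrayList<Directory>();
        RAMDirectory ramDir = new RAMDirectory();
        IndexWriter ramWriter = new IndexWriter(ramDir, new StandardAnalyzer(), true);
        int inBatch = 0;
        int batchNo = 0;

        for (String text : texts) {
            Document doc = new Document();
            doc.add(Field.Text("body", text)); // Lucene 1.x-style field
            ramWriter.addDocument(doc);
            if (++inBatch >= DOCS_PER_BATCH) {
                // Flush: merge the in-memory index into a temporary on-disk index.
                ramWriter.close();
                Directory tmp = FSDirectory.getDirectory("/tmp/batch-" + batchNo++, true);
                IndexWriter tmpWriter = new IndexWriter(tmp, new StandardAnalyzer(), true);
                tmpWriter.addIndexes(new Directory[] { ramDir });
                tmpWriter.close();
                batches.add(tmp);
                // Start a fresh in-memory batch.
                ramDir = new RAMDirectory();
                ramWriter = new IndexWriter(ramDir, new StandardAnalyzer(), true);
                inBatch = 0;
            }
        }
        ramWriter.close();
        batches.add(ramDir); // last, possibly partial, in-memory batch

        // Single merge of all batches into one index at the very end --
        // the step the poster found gave the considerable speed-up.
        IndexWriter finalWriter = new IndexWriter(
                FSDirectory.getDirectory("/tmp/final-index", true),
                new StandardAnalyzer(), true);
        finalWriter.addIndexes((Directory[]) batches.toArray(new Directory[0]));
        finalWriter.close();
    }
}
```

The key design point is that each intermediate index is written once and merged once; merging everything in a single `addIndexes` call at the end avoids the repeated re-merging you get when adding documents directly to one ever-growing on-disk index.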

Best regards,
Andrzej Bialecki

Software Architect, System Integration Specialist
CEN/ISSS EC Workshop, ECIMF project chair
EU FP6 E-Commerce Expert/Evaluator
FreeBSD developer
