lucene-java-user mailing list archives

From Erick Erickson <>
Subject Re: Caused by: java.lang.OutOfMemoryError: Map failed
Date Sat, 08 Nov 2014 00:20:38 GMT
bq: Our server runs many hundreds (soon to be thousands) of indexes

This is actually kind of scary. How do you expect to fit "many
thousands" of indexes into memory? Raising the per-process virtual
memory limit to unlimited still doesn't address the amount of RAM the
Solr process itself needs. It holds things like caches (top-level and
per-segment), sort lists, all that. How many GB of indexes are we
talking about here? Note that raw index size is not a great guide to
RAM requirements, but I'm just trying to get a handle on the scale
you're at. You're not, for instance, going to handle terabyte-scale
indexes on a single machine satisfactorily IMO.
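[Editorial aside: the "Map failed" flavor of OutOfMemoryError means an mmap call failed, not that the Java heap is exhausted. On Linux the two limits worth checking are the per-process virtual address space and the kernel's cap on mapped regions per process. A quick hedged sketch; the 262144 value is purely illustrative:]

```shell
# Per-process virtual address space limit; Lucene's MMapDirectory
# effectively wants this to be "unlimited"
ulimit -v

# Kernel cap on the number of memory-mapped regions per process (Linux only);
# each open index segment consumes one or more mappings
cat /proc/sys/vm/max_map_count

# Raising the mmap cap requires root; 262144 is an illustrative value:
# sysctl -w vm.max_map_count=262144
```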

If your usage pattern is that a user signs on, works with their index
for a while, then signs off, you might get some joy out of the
LotsOfCores option. That said, this option has NOT been validated on
SolrCloud setups, where I expect it'll have problems.
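[Editorial aside: LotsOfCores is driven by per-core flags in each core's core.properties file, shown below as a hedged sketch (the core name is hypothetical; check the Solr reference guide for your version):]

```properties
# core.properties for a rarely-used, per-user core ("usercore1" is illustrative)
name=usercore1
transient=true        # core may be unloaded when the transient cache is full
loadOnStartup=false   # don't load the core until a request actually arrives
```

The number of transient cores kept loaded at once is capped by the `transientCacheSize` setting in solr.xml; cores beyond that cap are unloaded on an LRU basis.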


On Fri, Nov 7, 2014 at 2:24 PM, Uwe Schindler <> wrote:
> Hi,
>> That error can also be thrown when the number of open files exceeds the
>> given limit. "OutOfMemory" should really have been named
>> "OutOfResources".
> This was changed already: Lucene no longer rethrows the OOM (it removes
> the OOM from the stack trace) and adds useful information instead. So I
> think the version of Lucene that produced this exception is older than 4.9.
> Uwe
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:

