lucene-solr-user mailing list archives

From "Robert Petersen" <>
Subject RE: OOM on uninvert field request
Date Wed, 30 Jun 2010 22:19:11 GMT
Most of these hundreds of facet fields have tens of values, but a couple have thousands. Are
thousands of different values too many for enum, or is that still OK? If so, I could apply
it carte blanche to all of these fields...
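For reference, facet.method can also be set per field with Solr's per-field override syntax, so enum could be applied selectively to the large fields rather than carte blanche (the field names below are hypothetical, just to sketch the request shape):

```
# Hypothetical request: use the enum method only for one large field,
# leaving the default method for the rest.
http://localhost:8983/solr/select?q=*:*
    &facet=true
    &facet.field=attr_color
    &facet.field=attr_brand
    &f.attr_brand.facet.method=enum
```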

-----Original Message-----
From: [] On Behalf Of Yonik Seeley
Sent: Wednesday, June 30, 2010 1:38 PM
Subject: Re: OOM on uninvert field request

On Tue, Jun 29, 2010 at 7:32 PM, Robert Petersen <> wrote:
> Hello, I am trying to find the right max and min heap settings for Java 1.6 on a 20GB
> index with 8 million docs, running the 1.6.0_18 JVM with Solr 1.4. I currently have
> Java set to an even 4GB for both min and max (export JAVA_OPTS="-Xmx4096m -Xms4096m"),
> which is doing pretty well, but I am occasionally still getting the OOM errors below.
> We're running on dual quad-core Xeons with 16GB of memory installed.
> Is the memSize mentioned in the INFO line for the uninvert in bytes? Does memSize=29604020
> mean that one field takes about 29MB?


> We have a few hundred of these fields, and they contain ints used as IDs, so I guess
> they could eat all the memory once they are all uninverted after we apply load and
> enough queries are performed. Does the field type matter? Would int be better than
> string if these are lookup IDs sparsely populated across the index?

No, using UnInvertedField faceting, the fieldType won't matter much at
all for the space it takes up.

The key here is that it looks like the number of unique terms in these
fields is low - you would probably do much better with
facet.method=enum (which iterates over terms rather than documents).
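A rough sanity check on the numbers in the thread supports this: if the reported memSize is in bytes, a few hundred uninverted fields of that size would overrun the 4GB heap on their own. This is a back-of-envelope sketch; the 300-field count is an illustrative assumption standing in for "a few hundred".

```python
# Back-of-envelope estimate of UnInvertedField memory use, assuming the
# memSize reported in the INFO log line is in bytes.
mem_size_bytes = 29_604_020          # memSize for one uninverted field
heap_bytes = 4 * 1024**3             # -Xmx4096m

per_field_mib = mem_size_bytes / 1024**2
print(f"one field: {per_field_mib:.1f} MiB")        # ~28.2 MiB

# "A few hundred" fields -- 300 is an illustrative assumption.
num_fields = 300
total_bytes = num_fields * mem_size_bytes
print(f"{num_fields} fields: {total_bytes / 1024**3:.1f} GiB "
      f"vs {heap_bytes / 1024**3:.0f} GiB heap")
```

So even a conservative field count exceeds the configured heap, which is consistent with the OOM errors and with preferring facet.method=enum for these fields.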

