lucene-solr-user mailing list archives

From: Toke Eskildsen
Subject: Re: Increasing filterCache size and Java Heap size
Date: Wed, 17 Aug 2016 09:40:30 GMT
On Wed, 2016-08-17 at 11:02 +0800, Zheng Lin Edwin Yeo wrote:
> Would like to check, do I need to increase my Java Heap size for
> Solr, if I plan to increase my filterCache size in solrconfig.xml?
> I'm using Solr 6.1.0

It _seems_ that you can specify a limit in megabytes when using
LRUCache in Solr 5.2+.

The documentation only mentions it for queryResultCache, but I do not
know if that is intentional (i.e. it does not work for filterCache) or
a shortcoming of the documentation.
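For illustration, a sketch of what such a configuration might look like in solrconfig.xml — assuming maxRamMB is honoured for filterCache, which, as noted above, the documentation does not confirm (the value 512 is an arbitrary example):

```xml
<!-- Hypothetical: memory-limited filterCache. maxRamMB is only
     documented for queryResultCache, so this may be ignored here. -->
<filterCache class="solr.LRUCache"
             maxRamMB="512"
             autowarmCount="0"/>
```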

If it does work for filterCache too (using LRUCache, I guess), then
that would be a much better way of limiting cache size than the highly
insufficient count-based limiter.

I say "highly insufficient" because filter cache entries are not of
equal size. With small sets they are stored as sparse, using a
relatively small amount of memory. For larger sets they are stored as
bitmaps, taking up ~1K + maxdoc/8 bytes as Erick describes.

So a fixed upper limit measured in counts needs to be sized for the
worst case, meaning maxdoc/8 bytes per entry, to ensure stability. In
reality most of the filter cache entries are small, meaning that plenty
of heap goes unused. This leads people to over-allocate the max size
for the filterCache (very understandably), resulting in setups that are
only stable as long as there are not too many large filter sets stored.
That leaves stability to chance, really.
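A back-of-the-envelope sketch of that worst case, using the ~1K + maxdoc/8 figure from above (the function names and the 10M-document / 512-entry numbers are just illustrative assumptions):

```python
def filter_entry_worst_case(max_doc):
    """Worst-case bytes for one filterCache entry stored as a bitmap:
    roughly 1 KB of overhead plus one bit per document in the index."""
    return 1024 + max_doc // 8

def worst_case_heap(cache_entries, max_doc):
    """Heap needed if every entry under a count-based limit hits the
    bitmap worst case. Typical usage is far lower, since small filter
    sets are stored sparsely -- which is exactly the problem with
    sizing by count."""
    return cache_entries * filter_entry_worst_case(max_doc)

# Example: 10 million documents with a count-based limit of 512 entries.
# Each bitmap entry is ~1.25 MB, so the cache can reach ~640 MB of heap.
print(worst_case_heap(512, 10_000_000))  # 640524288 bytes
```

The gap between this worst case and the typical sparse-entry usage is why a count-based limit forces you to either over-provision heap or accept occasional instability.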

I would prefer the count-based limit to be deprecated for the
filterCache, or at least warned against, in favour of memory-based.

- Toke Eskildsen, State and University Library, Denmark
