lucene-solr-user mailing list archives

From Moazzam Khan <>
Subject Re: Need help with field collapsing and out of memory error
Date Wed, 01 Sep 2010 22:13:20 GMT

If this is how you configure the field collapsing cache, then I don't
have it set up:



I didn't add that part to solrconfig.xml.
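
(For reference, the cache block being asked about would look something
like this in solrconfig.xml — this is only a sketch based on the
SOLR-236 field-collapsing patch docs, and the element and attribute
names can differ between patch versions:)

```xml
<!-- hypothetical sketch: fieldCollapseCache as documented in the
     SOLR-236 patch; not present in my solrconfig.xml -->
<fieldCollapsing>
  <fieldCollapseCache
      class="solr.FastLRUCache"
      size="512"
      initialSize="512"
      autowarmCount="128"/>
</fieldCollapsing>
```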

The way I set up field collapsing was to add this tag:

<searchComponent name="collapse"
                 class="org.apache.solr.handler.component.CollapseComponent" />

Then I modified the default request handler (for standard queries) with this:

 <requestHandler name="standard" class="solr.SearchHandler" default="true">
    <!-- default values for query parameters -->
     <lst name="defaults">
       <str name="echoParams">explicit</str>

     <arr name="components">
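
(That snippet is cut off above — the complete handler is shaped roughly
like the following. The component list is my best reconstruction from
the SOLR-236 patch's documented wiring, so treat the names below as
assumptions rather than a copy of my actual config:)

```xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <!-- replace the stock component chain so "collapse" runs in place of
       the standard query component (per the SOLR-236 patch docs) -->
  <arr name="components">
    <str>collapse</str>
    <str>facet</str>
    <str>mlt</str>
    <str>highlight</str>
    <str>debug</str>
  </arr>
</requestHandler>
```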

On Wed, Sep 1, 2010 at 4:11 PM, Jean-Sebastien Vachon
<> wrote:
> Can you tell us your current settings for the fieldCollapseCache?
> I had similar issues with field collapsing, and I found out that this cache was responsible
> for most of the OOM exceptions.
> Reducing or even removing this cache from your configuration should help.
> On 2010-09-01, at 1:10 PM, Moazzam Khan wrote:
>> Hi guys,
>> I have about 20k documents in the Solr index (and there's a lot of
>> text in each of them). I have field collapsing enabled on a specific
>> field (AdvisorID).
>> The thing is, if I have field collapsing enabled in the search
>> request, I don't get a correct count for the total number of records
>> that matched. It always reports the number of "rows" I asked for as
>> the total number of records found.
>> And when I run a query with the search criteria *:* (to get the
>> number of total advisors in the index), Solr runs out of memory and
>> gives me an error saying:
>> SEVERE: java.lang.OutOfMemoryError: Java heap space
>>        at java.nio.CharBuffer.wrap(
>>        at java.nio.CharBuffer.wrap(
>>        at java.lang.StringCoding$StringDecoder.decode(
>>        at java.lang.StringCoding.decode(
>> This is going to be a huge problem later on when we index 50k
>> documents.
>> These are the options I am running Solr with :
>> java -Xms2048M -Xmx2048M -XX:+UseConcMarkSweepGC -XX:PermSize=1024m
>> -XX:MaxPermSize=1024m -jar start.jar
>> Is there any way I can get the counts and not run out of memory?
>> Thanks in advance,
>> Moazzam
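
One workaround I may try for the count query: run it with rows=0 so
Solr returns only the header and numFound, without decoding any stored
fields into the response (the StringCoding frames in the trace suggest
the heap is going to response strings). A sketch, assuming the default
example port and whether the collapsed numFound is reliable in this
patch version still needs testing:

```
http://localhost:8983/solr/select?q=*:*&collapse.field=AdvisorID&rows=0
```

Dropping collapse.field from that URL would give the total document
count rather than the collapsed count.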
