lucene-solr-user mailing list archives

From Yonik Seeley <yo...@lucidimagination.com>
Subject Re: Solr Trunk Heap Space Issues
Date Fri, 02 Oct 2009 14:01:48 GMT
On Fri, Oct 2, 2009 at 9:54 AM, Jeff Newburn <jnewburn@zappos.com> wrote:
> Ah, yes, we do have some warming queries, which would look like a search.  Did
> that side change enough to push up memory requirements to the point where we
> would run out like this?

What do the warming requests look like, and what are the field
types for the fields they reference?
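
[For reference: warming queries are typically configured as firstSearcher/newSearcher listeners in solrconfig.xml. A minimal sketch is below; the query strings, sort, and facet field are illustrative placeholders, not taken from this thread.]

```xml
<!-- solrconfig.xml: warming listeners run queries against a new searcher
     before it serves live traffic, which populates the searcher caches
     (this is why warming can show up as "search" memory on an indexing box).
     The queries and field names below are hypothetical examples. -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
    <lst><str name="q">*:*</str><str name="facet">true</str>
         <str name="facet.field">cat</str></lst>
  </arr>
</listener>
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">static firstSearcher warming query</str></lst>
  </arr>
</listener>
```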

>  Also, would FastLRU cache make a difference?

It shouldn't.
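
[For reference: the cache implementation is swapped by changing the class attribute in solrconfig.xml, as sketched below. FastLRUCache changes locking behavior (gets without a global lock), not what is stored, which is why it shouldn't change the memory footprint. The sizes shown are illustrative defaults, not the poster's settings.]

```xml
<!-- solrconfig.xml: only the implementation class differs from LRUCache;
     the entries cached (and hence the heap used) are the same.
     size/initialSize/autowarmCount values here are examples. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```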

-Yonik
http://www.lucidimagination.com


> Jeff Newburn
> Software Engineer, Zappos.com
> jnewburn@zappos.com - 702-943-7562
>
>
>> From: Yonik Seeley <yonik@lucidimagination.com>
>> Reply-To: <solr-user@lucene.apache.org>
>> Date: Fri, 2 Oct 2009 00:53:46 -0400
>> To: <solr-user@lucene.apache.org>
>> Subject: Re: Solr Trunk Heap Space Issues
>>
>> On Thu, Oct 1, 2009 at 8:45 PM, Jeffery Newburn <jnewburn@zappos.com> wrote:
>>> I loaded the JVM and started indexing. It is a test server, so unless some
>>> errant query came in, there was no searching. Our instance has only 512MB, but my
>>> concern is the obvious leap in memory requirements, since it worked before. What
>>> other data would be helpful with this?
>>
>> Interesting... not too much should have changed for memory
>> requirements on the indexing side.
>> TokenStreams are now reused (and hence cached) per thread... but that
>> normally wouldn't amount to much.
>>
>> There was recently another bug where compound file format was being
>> used regardless of the config settings... but I think that was fixed
>> on the 29th.
>>
>> Maybe you were already close to the limit required?
>> Also, your heap dump did show LRUCache taking up 170MB, and only
>> searches populate that (perhaps you have warming searches configured
>> on this server?)
>>
>> -Yonik
>> http://www.lucidimagination.com
>>
>>>
>>>
>>> On Oct 1, 2009, at 5:14 PM, "Mark Miller" <markrmiller@gmail.com> wrote:
>>>
>>>> Jeff Newburn wrote:
>>>>>
>>>>> Ok I was able to get a heap dump from the GC Limit error.
>>>>>
>>>>> 1 instance of LRUCache is taking 170MB
>>>>> 1 instance of SchemaIndex is taking 56MB
>>>>> 4 instances of SynonymMap are taking 112MB
>>>>>
>>>>> There is no searching going on during this index update process.
>>>>>
>>>>> Any ideas what on earth is going on?  Like I said, my May version did this
>>>>> without any problems whatsoever.
>>>>>
>>>>>
>>>> Had any searching gone on, though? Even if it's not occurring during the
>>>> indexing, you will still have the data structures loaded if searches
>>>> occurred earlier.
>>>>
>>>> What heap size do you have - that doesn't look like much data to me ...
>>>>
>>>> --
>>>> - Mark
>>>>
>>>> http://www.lucidimagination.com
>>>>
>>>>
>>>>
>>>
>
>
