lucene-solr-user mailing list archives

From Dmitry Kan <solrexp...@gmail.com>
Subject Re: unusually high 4.10.2 vs 4.3.1 RAM consumption
Date Tue, 10 Mar 2015 12:20:28 GMT
For the sake of completeness, I just wanted to confirm that these params
had a positive effect:

-Dsolr.solr.home=cores -Xmx12000m -Djava.awt.headless=true -XX:+UseParNewGC
-XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC
-XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40

This freed up a couple dozen GBs on the Solr server!
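
For anyone who wants to replicate this, a minimal sketch of the full start
command with these flags could look like the one below (assuming the stock
Jetty start.jar launcher from the Solr example directory; paths and the heap
size are specific to our setup and will differ for yours):

  java -Dsolr.solr.home=cores -Xmx12000m -Djava.awt.headless=true \
       -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC \
       -XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40 \
       -jar start.jar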

On Tue, Feb 17, 2015 at 1:47 PM, Dmitry Kan <solrexpert@gmail.com> wrote:

> Thanks Toke!
>
> Now I consistently see the saw-tooth pattern on two shards with the new GC
> parameters; next I will try your suggestion.
>
> The current params are:
>
> -Xmx25600m -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent
> -XX:+UseConcMarkSweepGC -XX:MaxTenuringThreshold=8
> -XX:CMSInitiatingOccupancyFraction=40
>
> Dmitry
>
> On Tue, Feb 17, 2015 at 1:34 PM, Toke Eskildsen <te@statsbiblioteket.dk>
> wrote:
>
>> On Tue, 2015-02-17 at 11:05 +0100, Dmitry Kan wrote:
>> > Solr: 4.10.2 (high load, mass indexing)
>> > Java: 1.7.0_76 (Oracle)
>> > -Xmx25600m
>> >
>> >
>> > Solr: 4.3.1 (normal load, no mass indexing)
>> > Java: 1.7.0_11 (Oracle)
>> > -Xmx25600m
>> >
>> > The RAM consumption remained the same after the load had stopped on the
>> > 4.10.2 cluster. Manually triggering a garbage collection on a 4.10.2
>> > shard via jvisualvm dropped the used RAM from 8.5G to 0.5G, but the
>> > reserved RAM as seen by top remained at the 9G level.
>>
>> As the JVM does not free OS memory once allocated, top just shows
>> whatever peak the heap reached at some point. When you tell the JVM that
>> it is free to use 25GB, it makes a lot of sense for it to allocate a fair
>> chunk of that instead of garbage collecting during a period of high usage
>> (mass indexing, for example).
>>
>> > What else could account for such a difference -- Solr or the JVM? Can
>> > it only be explained by the mass indexing? What is worrisome is that
>> > the 4.10.2 shard reserves 8x what it uses.
>>
>> If you set your Xmx to a lot less, the JVM will probably favour more
>> frequent garbage collections over extra heap allocation.
>>
>> - Toke Eskildsen, State and University Library, Denmark
>>
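
As a side note for anyone watching the same thing: a quick way to see the
gap Toke describes between used and committed heap, without attaching
jvisualvm, is from the command line (the pid below is a placeholder):

  jstat -gcutil <pid> 5000   # per-generation usage as a percentage of committed space
  jcmd <pid> GC.run          # request an explicit GC, like the "Perform GC" button in jvisualvm
  top -p <pid>               # RES stays near its peak even after the used heap drops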


-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: www.semanticanalyzer.info
