lucene-solr-user mailing list archives

From Markus Jelsma <markus.jel...@openindex.io>
Subject RE: Optimize stalls at the same point
Date Tue, 25 Jul 2017 22:03:30 GMT
I agree, although we do have a NewRatio of two instead of three. One of our clusters handles
between 600 and 800 queries per second per replica. Lowering it by just one got us much more
performance. A note: the only cache is FilterCache, and it has just a few dozen entries.
 
-----Original message-----
> From:Walter Underwood <wunder@wunderwood.org>
> Sent: Tuesday 25th July 2017 22:39
> To: solr-user@lucene.apache.org
> Subject: Re: Optimize stalls at the same point
> 
> I’ve never been fond of elaborate GC settings. I prefer to set a few things then let
> it run. I know someone who’s worked on garbage collectors for thirty years. I don’t second
> guess him. 
> 
> From watching GC performance under a load benchmark (CMS/ParNew) with Solr 4.x, I increased
> the new space. Individual requests make a lot of allocations that are garbage at the end of
> the request. All of those need to come from the new space. If new space is not big enough,
> they’ll be allocated from tenured space. I settled on an 8G heap with 2G of new space. These
> are the options (Java 7):
> 
> export CATALINA_OPTS="$CATALINA_OPTS -d64"
> export CATALINA_OPTS="$CATALINA_OPTS -server"
> export CATALINA_OPTS="$CATALINA_OPTS -Xms8g"
> export CATALINA_OPTS="$CATALINA_OPTS -Xmx8g"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:NewSize=2g"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseConcMarkSweepGC"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseParNewGC"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:+ExplicitGCInvokesConcurrent"
> export CATALINA_OPTS="$CATALINA_OPTS -verbose:gc"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCTimeStamps"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:-TraceClassUnloading"
> export CATALINA_OPTS="$CATALINA_OPTS -Xloggc:$CATALINA_HOME/logs/gc.log"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
> export CATALINA_OPTS="$CATALINA_OPTS -XX:HeapDumpPath=$CATALINA_HOME/logs/"
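
A hedged note on the logging flags above: `-XX:+PrintGCDetails` and friends are Java 7/8 era and were removed in Java 9's unified logging. A rough Java 9+ equivalent (an assumption sketched for illustration, not part of the original setup) would be:

```shell
# Java 9+ unified-logging replacement for the -verbose:gc / -XX:+PrintGC* /
# -Xloggc flags above; the decorators after the path are a matter of taste.
export CATALINA_OPTS="$CATALINA_OPTS -Xlog:gc*:file=$CATALINA_HOME/logs/gc.log:time,uptime,level,tags"
```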
> 
> Tenured space will rise slowly, mostly because of cache ejections and background merges,
> I think. Cache ejections from an LRU cache will almost always be in tenured space.
> 
> For Java 8 and Solr 6.5.1, we are running the G1 collector and are very happy with it. We
> run the options recommended by Shawn Heisey on this list.
> 
> SOLR_HEAP=8g
> # Use G1 GC  -- wunder 2017-01-23
> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> GC_TUNE=" \
> -XX:+UseG1GC \
> -XX:+ParallelRefProcEnabled \
> -XX:G1HeapRegionSize=8m \
> -XX:MaxGCPauseMillis=200 \
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \
> "
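
For readers wondering where `SOLR_HEAP` and `GC_TUNE` go: in Solr 5/6 the `bin/solr` start script sources an include file at startup and picks these variables up from it. A quick way to check (the install path below is illustrative, not from this thread):

```shell
# On a typical Linux install the include file is bin/solr.in.sh;
# SOLR_HEAP and GC_TUNE set there are applied by bin/solr at startup.
grep -E '^(SOLR_HEAP|GC_TUNE)' /opt/solr/bin/solr.in.sh
```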
> 
> Last week, I benchmarked the 4.x config handling 15,000 requests/minute with a 95th percentile
> response time of 30 ms, using production logs.
> 
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
> 
> 
> > On Jul 25, 2017, at 1:24 PM, Markus Jelsma <markus.jelsma@openindex.io> wrote:
> > 
> > Upgrade to 6.x and get, in general, decent JVM settings. And decrease your heap;
> > having it so extremely large is detrimental at best.
> > 
> > Our shards can be 25 GB in size, but we run fine (apart from other problems recently
> > discovered) with a 900 MB heap, so you probably have a lot of room to spare. Your max heap
> > is over 100 times larger than ours, your index just around 16 times. It should work with
> > less.
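
The ratios quoted here check out against the figures earlier in the thread (a 100000 MB max heap vs. a 900 MB heap, a 417 GB index vs. 25 GB shards); a quick shell sanity check:

```shell
# Heap and index ratios from the numbers given in this thread.
echo "heap ratio:  $(( 100000 / 900 ))x"   # -Xmx100000m vs a 900 MB heap
echo "index ratio: $(( 417 / 25 ))x"       # 417 GB index vs a 25 GB shard
```

Integer division gives 111x for the heaps ("over 100 times") and 16x for the indexes.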
> > 
> > As a bonus, with a smaller heap, you can have much more index data in mapped memory.
> > 
> > Regards,
> > Markus
> > 
> > -----Original message-----
> >> From:David Hastings <hastings.recursive@gmail.com>
> >> Sent: Tuesday 25th July 2017 22:15
> >> To: solr-user@lucene.apache.org
> >> Subject: Re: Optimize stalls at the same point
> >> 
> >> It turned out that I think it was a large GC operation, as it has since
> >> resumed optimizing.  Current Java options are as follows for the indexing
> >> server (they are different for the search servers). If you have any
> >> suggestions as to changes, I am more than happy to hear them; honestly, they
> >> have just been passed down from one installation to the next ever since we
> >> used to use Tomcat to host Solr:
> >> -server -Xss256k -d64 -Xmx100000m -Xms7000m -XX:NewRatio=3
> >> -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> >> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> >> -XX:ParallelGCThreads=8 -XX:+CMSScavengeBeforeRemark
> >> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
> >> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> >> -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -verbose:gc
> >> -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> >> -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution
> >> -XX:+PrintGCApplicationStoppedTime
> >> -Xloggc:XXXXXX/solr-5.2.1/server/logs/solr_gc.log
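
On the `-XX:NewRatio` flag these options and the replies debate: in HotSpot it sets the old:young ratio, so the young generation gets heap / (NewRatio + 1). A quick shell check of the sizes implied by the settings in this thread:

```shell
# HotSpot: -XX:NewRatio=N means old:young = N:1, so young = heap / (N + 1).
heap_mb=100000                        # from -Xmx100000m in the options above
for ratio in 2 3; do                  # the two values discussed in the thread
  young_mb=$(( heap_mb / (ratio + 1) ))
  echo "NewRatio=$ratio -> young gen ~${young_mb} MB"
done
```

So at this heap size, dropping NewRatio from 3 to 2 grows the young generation from roughly 25000 MB to roughly 33333 MB.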
> >> 
> >> and for my live searchers I use:
> >> -server -Xss256k -Xms50000m -Xmx50000m -XX:NewRatio=3 -XX:SurvivorRatio=4
> >> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC
> >> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=8
> >> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m
> >> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50
> >> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled
> >> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails
> >> -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution
> >> -XX:+PrintGCApplicationStoppedTime
> >> -Xloggc:/SSD2TB01/solr-5.2.1/server/logs/solr_gc.log
> >> 
> >> 
> >> 
> >> On Tue, Jul 25, 2017 at 4:02 PM, Walter Underwood <wunder@wunderwood.org>
> >> wrote:
> >> 
> >>> Are you sure you need a 100GB heap? The stall could be a major GC.
> >>> 
> >>> We run with an 8GB heap. We also run with Xmx equal to Xms, growing memory
> >>> to the max was really time-consuming after startup.
> >>> 
> >>> What version of Java? What GC options?
> >>> 
> >>> wunder
> >>> Walter Underwood
> >>> wunder@wunderwood.org
> >>> http://observer.wunderwood.org/  (my blog)
> >>> 
> >>> 
> >>>> On Jul 25, 2017, at 12:03 PM, David Hastings <hastings.recursive@gmail.com> wrote:
> >>>> 
> >>>> I am trying to optimize a rather large index (417 GB) because it's sitting
> >>>> at 28% deletions.  However, when optimizing, it stops at exactly 492.24 GB
> >>>> every time.  When I restart Solr it will fall back down to 417 GB, and
> >>>> again, if I send an optimize command, the exact same 492.24 GB and it
> >>>> stops optimizing.  There is plenty of space on the drive, and I'm running
> >>>> it at -Xmx100000m -Xms7000m on a machine with 132 GB of RAM and 24 cores.
> >>>> I have never run into this problem before, but also never had the index
> >>>> get this large.  Any ideas?
> >>>> (Solr 5.2, btw)
> >>>> thanks,
> >>>> -Dave
> >>> 
> >>> 
> >> 
> 
> 
