From: Doug Steigerwald <dsteigerw...@mcclatchyinteractive.com>
Subject: High load when updating many cores
Date: Wed, 02 Jul 2008 16:54:04 GMT
We're experiencing some high load on our Solr master server.  It  
currently has 30 cores and processes over 3 million updates per day.   
During most of the day the load on the master is low (0.5 to 2), but  
sometimes we get spikes in excess of 12 for hours at a time.

The only reason I can figure for why this is happening is that we're
updating almost all of our cores during those times.  Usually our sites
send updates fairly randomly throughout the day, but during these spikes
it seems like many of them are sending updates at the same time.

Over a 3-hour period where the load was ~12, we processed only 156k
updates.  That's usually a pretty light load when it comes through a
single core from just a few producers.  It seems as though we're getting
updates to nearly all 30 cores at once, and something in the background
is slowing things down.

Here are some stats about our setup.

4x3.2GHz Xeon.  8GB RAM.  RHEL 5.1.  4GB max heap size for Solr.  Our  
build is a trunk build from January (using Lucene 2.3.0).  Java  
1.6.0_03-b05 (64bit).

Using Jetty started as:  'java -server -Xms1024m -Xmx4096m -jar  
start.jar'

We never query the master, but we do have caching enabled (master and
slave use the same configs).  autowarmCount is set to 0 for each core,
and all 30 cores share the same configs.  We autocommit every 5 seconds.
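
For reference, the relevant solrconfig.xml settings look roughly like
this (the cache sizes below are placeholders, but the autowarmCount=0
and the 5-second autocommit are what we actually run):

    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <!-- commit pending updates every 5 seconds -->
        <maxTime>5000</maxTime>
      </autoCommit>
    </updateHandler>

    <query>
      <!-- size/initialSize are placeholders; autowarmCount=0 everywhere -->
      <filterCache class="solr.LRUCache" size="512" initialSize="512"
                   autowarmCount="0"/>
      <queryResultCache class="solr.LRUCache" size="512" initialSize="512"
                        autowarmCount="0"/>
    </query>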

Any ideas what might cause the load to spike?  Could it be our caching  
even though we have autowarmCount set to 0?  Could it be that Solr is  
trying to merge a lot of indexes at once?
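
If it is merging, I assume the knob to experiment with is mergeFactor in
the <indexDefaults> section of solrconfig.xml; with a 5-second
autocommit we must be writing a lot of small segments across those 30
cores.  A sketch of what I mean (10 is the stock example value, so this
is just the default, not something we've tuned):

    <indexDefaults>
      <!-- how many segments accumulate before a merge kicks off;
           lower values merge smaller batches more often -->
      <mergeFactor>10</mergeFactor>
    </indexDefaults>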

Or could it be a garbage collection issue?
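
If GC is worth ruling out, I figure the next step is to restart the
master with GC logging turned on and see whether the pauses line up
with the load spikes.  Something like this (standard HotSpot flags for
our Java 1.6; gc.log is just a name I picked):

    java -server -Xms1024m -Xmx4096m \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:gc.log -jar start.jar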

Thanks.
Doug
