lucene-solr-user mailing list archives

From Bictor Man <bictor...@gmail.com>
Subject Re: drastic performance decrease with 20 cores
Date Tue, 27 Sep 2011 00:43:41 GMT
Hi guys,

Thanks for your replies. Indeed, filesystem caching seems to be the
difference. Sadly I can't add more memory, and the 6GB/20-core combination
doesn't work, so I'll just try to tweak it as much as I can.
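[Editor's note: a minimal sketch of the tuning suggested in this thread, i.e. capping the JVM heap so the OS keeps memory for the filesystem cache. The 3/8 split and the `start.jar` launch line are illustrative assumptions (stock Jetty example layout of Solr 3.x), not a rule from the thread.]

```shell
# Hypothetical sizing sketch: give Solr a modest heap and leave the rest
# of RAM to the OS page cache, which Lucene relies on for index reads.
TOTAL_MB=16384                   # server RAM (16GB, as in this thread)
HEAP_MB=$((TOTAL_MB * 3 / 8))    # ~6GB heap; remainder stays for file caching
echo "suggested heap: ${HEAP_MB}m"

# Then start Solr 3.x with a fixed heap, e.g. (assumed Jetty example layout):
# java -Xms${HEAP_MB}m -Xmx${HEAP_MB}m -jar start.jar
```

Setting -Xms equal to -Xmx avoids heap resizing; the right split still has to be found empirically, as François says below.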

Thanks a lot.


2011/9/26 François Schiettecatte <fschiettecatte@gmail.com>

> You have not said how big your index is but I suspect that allocating 13GB
> for your 20 cores is starving the OS of memory for caching file data. Have
> you tried 6GB with 20 cores? I suspect you will see the same performance as
> 6GB & 10 cores.
>
> Generally it is better to allocate just enough memory to Solr to run
> optimally rather than as much as possible. What 'just enough' means varies
> from setup to setup, so you will need to try out different allocations and
> see where the sweet spot is.
>
> Cheers
>
> François
>
>
> On Sep 26, 2011, at 9:53 AM, Bictor Man wrote:
>
> > Hi everyone,
> >
> > Sorry if this issue has been discussed before, but I'm new to the list.
> >
> > I have a solr (3.4) instance running with 20 cores (around 4 million docs
> > each).
> > The instance has 13GB of heap allocated on a 16GB RAM server. If I run
> > several sets of queries sequentially in each of the cores, the I/O access
> > goes very high, and so does the system load, while the CPU percentage
> > always remains low. It takes almost 1 hour to complete the set of
> > queries.
> >
> > If I stop Solr and restart it with 6GB allocated and 10 cores, after a
> > bit the I/O access goes down and the CPU goes up, and it takes only
> > around 5 minutes to complete all sets of queries.
> >
> > This means that for me it is MUCH more performant to have 2 Solr
> > instances running with half the data and half the memory each than a
> > single instance with all the data and memory.
> >
> > It would even be way faster to have 1 instance with half the
> > cores/memory, run the queries, shut it down, start a new instance, and
> > repeat the process than to have one big instance running everything.
> >
> > Furthermore, if I take the 20-core/13GB instance, unload 10 of the
> > cores, trigger the garbage collector, and run the sets of queries again,
> > the behavior still remains slow, taking around 30 minutes.
> >
> > Am I missing something here? Does Solr change its caching policy
> > depending on the number of cores at startup, or something similar?
> >
> > Any hints will be very appreciated.
> >
> > Thanks,
> > Victor
>
>
