lucene-solr-user mailing list archives

From Erick Erickson <erickerick...@gmail.com>
Subject Re: Problem Commiting to Large Index
Date Mon, 05 Dec 2011 13:41:06 GMT
Some details please. Are you indexing and searching
on the same machine? How are you committing?
After every add? Via commitWithin? Via autocommit?
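For reference, the three styles look roughly like this from SolrJ
(a minimal sketch only, assuming the Solr 3.x SolrJ API; the URL,
document, and timing are placeholders, not taken from this thread):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class CommitStyles {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at your actual Solr instance.
        SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "example-1");

        // 1) Explicit commit after every add -- expensive at high ingest rates.
        solr.add(doc);
        solr.commit();

        // 2) commitWithin: ask Solr to commit within N ms of receiving the doc.
        UpdateRequest req = new UpdateRequest();
        req.add(doc);
        req.setCommitWithin(60000); // commit within 60 seconds
        req.process(solr);

        // 3) Autocommit: the client never commits; the <autoCommit> settings
        //    in solrconfig.xml trigger commits on the server side instead.
        solr.add(doc);
    }
}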

What version of Solr? What environment?

You might review:
http://wiki.apache.org/solr/UsingMailingLists

Best
Erick

On Fri, Dec 2, 2011 at 2:35 PM, Marty Trenouth <marty@peoplefinders.com> wrote:
> We are creating an index of about 500-600M records of fairly small documents.
>
> We are currently @ 252M+ records and adding documents at a rate of about 2k - 3k per
> second, in multithreaded 1K batches sent from multiple servers.  Our commits started
> bombing out with Out of Memory exceptions on a 14G heap.  17G on the heap works, but
> begins to push memory usage above the available physical memory.  Even reducing the
> number of documents committed does not seem to mitigate the out-of-memory issue.
>
> When looking at the JMX charts on memory usage, it doesn't look like a memory leak
> is present: the graph jumps up and down while maintaining a steady minimum.  However,
> I do see the odd behavior that the memory spike maximums gradually increase during
> indexing.
>
>
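For anyone following along, the batched-add pattern described above looks
roughly like this in SolrJ (a hypothetical sketch, assuming the 3.x SolrJ
API; the field names, document count, and URL are invented for illustration):

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    private static final int BATCH_SIZE = 1000; // "1K batches" as described above

    public static void main(String[] args) throws Exception {
        SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>(BATCH_SIZE);

        for (int i = 0; i < 1000000; i++) { // doc count invented for illustration
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "rec-" + i);
            doc.addField("name_s", "record " + i);
            batch.add(doc);

            if (batch.size() == BATCH_SIZE) {
                solr.add(batch); // one HTTP request per 1K docs; no commit here
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            solr.add(batch); // flush any remainder
        }
        // Commits are deliberately left to autoCommit / commitWithin rather than
        // issued per batch, one common way to avoid commit-time memory spikes.
    }
}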
