lucene-solr-user mailing list archives

From Tomás Fernández Löbbe <tomasflo...@gmail.com>
Subject Re: Solr Optimization Fail
Date Fri, 16 Dec 2011 12:29:45 GMT
Are you on Windows? There is a JVM bug that makes Solr keep the old files
even though they are no longer used. The files will eventually be removed,
but if you want them gone immediately, try optimizing twice: the second
optimize does very little work, but it does delete the old files.
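For reference, the double optimize can be issued over HTTP roughly like this (a sketch: the core name "collection1", host, and port are assumptions, so adjust them to your deployment):

```shell
# Assumed Solr update endpoint; "collection1" is a placeholder core name.
SOLR_UPDATE="http://localhost:8983/solr/collection1/update"

# First optimize: merges all segments down to one. On Windows the old
# segment files may still be held open by the JVM and not deleted yet.
curl -s "$SOLR_UPDATE?optimize=true" || echo "Solr not reachable"

# Second optimize: almost a no-op merge-wise, but it lets the old,
# now-unreferenced segment files from the first pass be deleted.
curl -s "$SOLR_UPDATE?optimize=true" || echo "Solr not reachable"
```

After the second pass the index directory should shrink back to roughly the size of the single merged segment.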

On Fri, Dec 16, 2011 at 9:10 AM, Juan Pablo Mora <juampa@informa.es> wrote:

> Maybe you are generating a snapshot of your index that is attached to the
> optimize?
> Look for post-commit or post-optimize events in your solrconfig.xml.
>
> ________________________________________
> From: Rajani Maski [rajinimaski@gmail.com]
> Sent: Friday, 16 December 2011 11:11
> To: solr-user@lucene.apache.org
> Subject: Solr Optimization Fail
>
> Hi,
>
>  When we optimize, it should reduce the index size, right?
>
> I have an index of 6 GB (5 million documents), built with a commit every
> 10000 documents.
>
> When I tried to optimize it with the HTTP optimize command, the index size
> grew to 12 GB. Why might this have happened?
>
> And can anyone please suggest a fix for it?
>
> Thanks
> Rajani
>
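The post-commit/post-optimize events Juan Pablo mentions are `<listener>` entries in solrconfig.xml. If a hook like the following sketch is present (the executable and paths here are the stock example-config values, not necessarily yours), every optimize spawns a snapshot of the index, which would account for the extra disk usage:

```xml
<!-- In solrconfig.xml, inside the <updateHandler> section. This runs an
     external snapshot script after every optimize; remove or comment it
     out if you do not want snapshots taken automatically. -->
<listener event="postOptimize" class="solr.RunExecutableListener">
  <str name="exe">solr/bin/snapshooter</str>
  <str name="dir">.</str>
  <bool name="wait">true</bool>
</listener>
```

A matching `postCommit` listener may also exist; either one can silently grow disk usage after index-modifying operations.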
