lucene-solr-user mailing list archives

From 龚俊衡 <junheng.g...@icloud.com>
Subject Re: About solr recovery
Date Wed, 04 Mar 2015 05:56:43 GMT
Sorry, the mailing list reformatted my email.

> On Mar 4, 2015, at 13:47, 龚俊衡 <junheng.gong@icloud.com> wrote:
> 
> Hi, Erick
> 
> Thanks for your quick reply.
> 
> We are using Solr 4.9.0 on 4 Aliyun cloud instances, each with a 4-core CPU, 32G of memory, and a 1G SSD.
> 
> We have 4 shards, distributed as follows ("E" marks a replica of that shard on the node):
> 
> Node                       shard1_0   shard1_1   shard2_0   shard2_1
> prmsop01 10.173.225.147    E                     E
> prmsop02 10.173.226.78                E                     E
> prmsop03 10.173.225.163    E                     E
> prmsop04 10.173.224.33                E                     E
> and each shard index size is 24G.
> 
> Currently we insert 500 documents per second.
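
When a replica cycles through recovery like this, the Collections API CLUSTERSTATUS action (available since Solr 4.8) reports each replica's state and makes it easy to spot the ones stuck in "recovering". Below is a minimal sketch of filtering such a response; the payload is a trimmed, hypothetical example (the collection name "mycoll", core names, and ports are made up, only the node IPs come from the thread), not output from this actual cluster:

```python
import json

# Trimmed, hypothetical CLUSTERSTATUS-style response. The shape follows the
# Collections API; the concrete values here are illustrative only.
sample = json.loads("""
{"cluster": {"collections": {"mycoll": {"shards": {
  "shard1_0": {"replicas": {
    "core_node1": {"node_name": "10.173.225.147:8983_solr", "state": "active"},
    "core_node2": {"node_name": "10.173.225.163:8983_solr", "state": "recovering"}}}
}}}}}
""")

def replicas_in_state(cluster_status, state):
    """Return (shard, replica, node) tuples for replicas in the given state."""
    out = []
    for coll in cluster_status["cluster"]["collections"].values():
        for shard_name, shard in coll["shards"].items():
            for rep_name, rep in shard["replicas"].items():
                if rep["state"] == state:
                    out.append((shard_name, rep_name, rep["node_name"]))
    return out

print(replicas_in_state(sample, "recovering"))
```

In practice the JSON would come from `/admin/collections?action=CLUSTERSTATUS&wt=json` on any node of the cluster.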
> 
> Thanks.
> 
>> On Mar 4, 2015, at 12:21, Erick Erickson <erickerickson@gmail.com> wrote:
>> 
>> It's always important to tell us _what_ version of Solr you are
>> running. There have been many improvements in this whole area;
>> perhaps it's already been fixed?
>> 
>> Best,
>> Erick
>> 
>> On Tue, Mar 3, 2015 at 6:20 PM, 龚俊衡 <junheng.gong@icloud.com> wrote:
>>> Hi,
>>> 
>>> I found that while a replica is recovering, one CPU core (usually cpu0)
>>> runs at 100% load, and then the leader's updates fail because the replica
>>> cannot respond to the leader's /update requests.
>>> 
>>> This causes the leader to send another recovery request to the replica,
>>> leaving the replica stuck in a recovery loop.
>>> 
>>> My question: is it possible to make the update-processing thread and the
>>> recovery thread run on different CPU cores?
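
For context on the question above: the JVM does not expose per-thread CPU affinity, so Solr itself cannot pin its update-processing and recovery threads to separate cores; on Linux, affinity can only be adjusted from outside the JVM, and then for the whole process (e.g. with taskset). The sketch below demonstrates the underlying Linux mechanism, `sched_setaffinity`, on the current process rather than a Solr JVM (Linux-only; the core numbers are arbitrary for illustration):

```python
import os

# Read the calling process's current CPU affinity (pid 0 = this process).
original = os.sched_getaffinity(0)

# Restrict this process to CPU 0 only, then verify the kernel applied it.
os.sched_setaffinity(0, {0})
assert os.sched_getaffinity(0) == {0}

# Restore the original affinity so the demo has no lasting effect.
os.sched_setaffinity(0, original)
print(sorted(original))
```

The equivalent from the shell would be `taskset -pc <cores> <pid>` against the Solr process, which again applies to the whole JVM, not individual threads.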
> 

