lucene-solr-user mailing list archives

From Duncan Irvine <>
Subject Re: Replication happening before replicateAfter event
Date Wed, 12 Dec 2012 12:09:30 GMT
Hi Erick,
  Thanks for replying.  On the subject of commit vs. optimize: for the
moment I'm replacing the entire index each time, beginning with a
delete *:*, so I think an optimize is actually OK, as it is
essentially a new index anyway.  Ultimately I'll want to be doing
smaller updates as well, and will have to change the policy to
replicate on commit as you suggest, but I'll need to work on the
data-import side to be able to do this safely - if I'm still doing a
bulk replace as well, I'll need to consider carefully how the two
would interact.
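For reference, the configuration in question looks roughly like this - a sketch only; host, port and core name are placeholders, and the real solrconfig.xml files have more in them:

```xml
<!-- Master solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">startup</str>
    <str name="replicateAfter">optimize</str>
    <!-- switch to "commit" once incremental updates are safe -->
  </lst>
</requestHandler>

<!-- Slave solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://masterhost:8983/solr/collection1/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```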

It turns out my problem wasn't a problem after all, just an edge case
on initial startup.  I don't think replication actually works as you
describe, though: the slaves simply poll the master's indexversion
(via e.g. http://masterhost:8983/solr/collection1/replication?command=indexversion),
asking the master for its latest replicable commit, and the
replicateAfter events actually fire on the master.
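A minimal sketch of that polling decision as I understand it (names are mine, not Solr's):

```python
def should_fetch(master_indexversion, slave_indexversion):
    """Slave-side decision: pull the index only when the master
    reports a commit point different from what the slave has."""
    return master_indexversion != slave_indexversion

# Simulate polls: the master reports a static version until its next
# replicateAfter event changes it.
polls = [100, 100, 100, 250]   # master indexversion seen on each poll
slave_version = 100
fetches = 0
for v in polls:
    if should_fetch(v, slave_version):
        slave_version = v      # a real slave would run fetchindex here
        fetches += 1
```

So as long as the master keeps answering with the same version, the slaves sit idle; only the last poll above triggers a pull.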
The ReplicationHandler stores the value for indexversion in a field
called "indexCommitPoint"; when your replicateAfter event fires, this
value is updated to the current index version.  However, because I was
using a completely fresh index, there was no commit at startup, and I
had not yet committed anything myself, so indexCommitPoint was null.
In that case Solr falls back on returning the current state of the
IndexDeletionPolicy, i.e. the latest soft commit.  Thus every time the
slaves polled the master they saw a new version of the index and
pulled it down.  Once a commit or optimize happens, though, the
master's ReplicationHandler is notified of the event and updates its
indexCommitPoint; subsequent calls to indexversion then return that
static point, so the slaves see a constant indexversion until the next
commit, even if the IndexDeletionPolicy is actually ahead of it.
I don't consider this a bug as such (maybe a documentation bug), and I
think it actually makes sense for the initial index to replicate as
much as possible, as soon as possible, until you first commit - it was
empty, after all.  The only real "bug" is that the slave serves a
later version than the master while this scenario plays out.  That's
because the Searcher on the master waits for the commit to happen,
whereas the slaves see a commit after each replication.  Perhaps the
ReplicationHandler should query the current IndexSearcher on the
master for the indexCommitPoint, rather than the IndexDeletionPolicy,
in this fallback mode?
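To make the fallback concrete, here's a sketch of the behaviour I'm describing - names are mine, and the real logic lives inside Solr's ReplicationHandler:

```python
def reported_indexversion(index_commit_point, deletion_policy_latest):
    """Master-side sketch: the version reported to polling slaves.

    Before any replicateAfter event has fired (fresh index, nothing
    committed), index_commit_point is None and the master falls back
    to the latest commit the IndexDeletionPolicy knows about - a
    moving target that includes soft commits.  Once an event fires,
    the recorded point is reported until the next event."""
    if index_commit_point is None:
        return deletion_policy_latest
    return index_commit_point

# Fresh index: every soft commit is visible to the slaves
fresh = reported_indexversion(None, 1354275651817)
# After the first commit/optimize event: pinned until the next event
pinned = reported_indexversion(1354275700000, 1354275999999)
```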

It just worried me a bit because, since I'm starting out with a delete
*:*, I didn't want my slaves to suddenly empty out.  Turns out I
needn't have worried - it's ticking over in production now without a
hitch :).
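For anyone wanting Erick's option <1> below: the ReplicationHandler is driven over HTTP, so pausing and resuming replication is just a matter of hitting the right command URLs.  A sketch of building them (host, port and core name are placeholders; the command names themselves are the real ones):

```python
# Placeholder base URL for the master's ReplicationHandler.
BASE = "http://masterhost:8983/solr/collection1/replication"

def replication_command(command):
    """Build a ReplicationHandler command URL.  Useful commands:
    indexversion (what the slaves poll), disablereplication /
    enablereplication (master side), disablepoll / enablepoll
    (slave side), and fetchindex (force a pull)."""
    return f"{BASE}?command={command}"

# Pause the master before a bulk reload, resume when done:
pause_url = replication_command("disablereplication")
resume_url = replication_command("enablereplication")
```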


On 1 Dec 2012, at 20:13, Erick Erickson <> wrote:

> First comment: you probably don't need to optimize. Despite its name, it
> rarely makes a difference and has several downsides; in particular, it'll
> make replication copy the entire index rather than just the changed
> segments.
> Optimize purges leftover data from docs that have been deleted, which will
> happen anyway on segment merges.
> But your problem isn't really a problem, I don't think. I think you're
> confusing special events and polling. When you set the properties
> "replicateAfter" "startup" and "optimize", you're really telling the slaves
> to update when any of them fires, _in addition to_ any replication that
> happens due to polling. So when you optimize, a couple of things happen.
> 1> all unclosed segments are closed.
> 2> segments are merged.
> If the poll happens between 1 and 2, you'll get an index replication. Then
> you'll get another after the optimize.
> Ditto on autocommits. An auto commit closes the open segments. As soon
> as a poll sees that, the new segments are pulled down.
> The intent is for polling to pull down all changes it can every time, that's
> just the way it's designed.
> So you have a couple of choices:
> 1> use the HTTP api to disable replication, then enable it when you want.
> 2> turn off autocommit and don't commit during indexing at all until the
> very end. No commit == no replication.
> 3> but even if you do <2>, you still might get a replication after commit
> and after optimize. If you insist on optimizing, you're probably stuck with
> <1>. But I'd really think twice about the optimize bit.
> Best
> Erick
> On Fri, Nov 30, 2012 at 7:25 AM, Duncan Irvine <> wrote:
>> Hi All,
>> I'm a bit new to the whole solr world and am having a slight problem with
>> replication.  I'm attempting to configure a master/slave scenario with bulk
>> updates happening periodically. I'd like to insert a large batch of docs to
>> the master, then invoke an optimize and have it only then replicate to the
>> slave.
>> At present I can create the master index, which seems to go to plan.
>> Watching the updateHandler, I see records being added, indexed and
>> auto-committed every so often.  If I query the master while I'm inserting,
>> even after auto-commits have happened, I see 0 records.  Then, when I
>> commit at the end, they all appear at once.  This is as I'd expect.
>> What doesn't seem to be working right is that I've configured replication
>> to "replicateAfter" "startup" and "optimize" with a pollInterval of 60s;
>> however the slave is replicating and serving the "uncommitted" data
>> (although presumably post-auto-commit).
>> According to my master, I have:
>> Version: 0
>> Gen: 1
>> Size: 1.53GB
>> replicateAfter: optimize, startup
>> And, at present, my slave says:
>> Master:
>> Version: 0
>> Gen: 1
>> Size: 1.53GB
>> Slave:
>> Version: 1354275651817
>> Gen: 52
>> Size: 1.39GB
>> Which is a bit odd.
>> If I query the slave, I get results and as the slave polls I gradually get
>> more and more.
>> Obviously, I can disable polling and enable it programmatically once I'm
>> ready, but I was hoping to avoid that.
>> Does anyone have any thoughts?
>> Cheers,
>> Duncan.
