lucene-dev mailing list archives

From "Jessica Cheng (JIRA)" <>
Subject [jira] [Commented] (SOLR-5872) Eliminate overseer queue
Date Mon, 17 Mar 2014 17:27:47 GMT


Jessica Cheng commented on SOLR-5872:

Seems like everyone is worried about batching. I think it'd be interesting to add logging/
stats tracking and experiment on a large cluster to see how much batching is actually achieved.

There are a few things I worry about with the current implementation:
- With the overseer queues, each state update is 4+ zookeeper writes: 1 enqueue to stateUpdateQueue,
1 enqueue to workqueue, 1 state update write (potentially batched), 1 dequeue from stateUpdateQueue,
and 1 dequeue from workqueue--not to mention that each core going through a restart could
generate quite a few state updates (down, potentially isLeader switch, recovering, up) and
each node can contain multiple cores.
- Empirically, we have definitely seen the workqueue back up with lots of items during a node
bounce--but of course this could be due to some bug that's causing the slowness we noticed.
- If batching really is so important, there's no batching for external collection state updates.
- In a "normal" rolling bounce where instances are restarted one-by-one, in the same order
each time, the Overseer is killed at each instance restart, thus hindering the recovery process
by gating state transition. (Here there are workarounds by playing with bounce orders, etc.,
but I would argue that in any organization that would have a cluster large enough to worry
about this, there is most likely a system that governs the machines and normally does instance
1 to N bounces, and a general-purpose ops team that eschews service-/app-specific bounce instructions.)

With all that said, I would really appreciate having more background details about what
problems Mark and Sami have seen in the old implementation, and exactly what that old implementation was.

> Eliminate overseer queue 
> -------------------------
>                 Key: SOLR-5872
>                 URL:
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Noble Paul
>            Assignee: Noble Paul
> The overseer queue is one of the busiest points in the entire system. The raison d'être
of the queue is
>  * Provide batching of operations for the main clusterstate.json so that state updates
are minimized 
> * Avoid race conditions and ensure order
> Now, as we move the individual collection states out of the main clusterstate.json,
the batching is not useful anymore.
> Race conditions can easily be solved by using a compare-and-set in ZooKeeper. 
> The proposed solution is: whenever an operation needs to be performed on the
clusterstate, the same thread (and of course the same JVM) should
>  # read the fresh state and version of zk node  
>  # construct the new state 
>  # perform a compare and set
>  # if compare and set fails go to step 1
> This should be limited to operations performed on external collections, because batching
would still be required for the others 

This message was sent by Atlassian JIRA

