lucene-dev mailing list archives

From <>
Subject SolrCloud issues
Date Mon, 01 Feb 2016 17:14:17 GMT

We are currently running benchmarks on Solr 5.4.0 and have hit several issues related to
SolrCloud that lead to recoveries and inconsistencies.
Based on our tests, this version seems less stable under pressure than the 4.10.4 version we
had installed previously.
We were able to mitigate the effects by increasing numRecordsToKeep in the update log and
limiting replication bandwidth.
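
For reference, a sketch of how those two mitigations look in solrconfig.xml — the parameter names (numRecordsToKeep, maxNumLogsToKeep, maxWriteMBPerSec) exist in Solr 5.x, but the values below are illustrative, not recommendations:

```xml
<!-- Keep more transaction-log records so a recovering replica can
     catch up via PeerSync instead of a full index replication. -->
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numRecordsToKeep">10000</int>
  <int name="maxNumLogsToKeep">20</int>
</updateLog>

<!-- Throttle replication bandwidth on the source node. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="defaults">
    <str name="maxWriteMBPerSec">16</str>
  </lst>
</requestHandler>
```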
But not all problems were resolved, and more worryingly, it is more difficult to get back to a
running cluster.
For example, we ended up with a situation where, on one shard, the leader is down while all
replicas are active.
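
That "leader down but replicas active" condition can be detected from cluster state. A minimal sketch, assuming a dict shaped like the per-collection state returned by the Collections API CLUSTERSTATUS action (the function name is ours):

```python
def shards_with_down_leader(collection_state):
    """Return names of shards whose leader is not active while at
    least one other replica still reports state 'active'."""
    bad = []
    for shard_name, shard in collection_state["shards"].items():
        leader_active = False
        others_active = False
        for replica in shard["replicas"].values():
            is_leader = replica.get("leader") == "true"
            is_active = replica.get("state") == "active"
            if is_leader and is_active:
                leader_active = True
            elif not is_leader and is_active:
                others_active = True
        if not leader_active and others_active:
            bad.append(shard_name)
    return bad
```

Running this periodically against live cluster state would flag exactly the stuck situation described above.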

We found a particular pattern that leads to a bad cluster state, described here:

There are also many open issues (some resolved in version 5.5) related to SolrCloud / ZooKeeper
/ replication.

Here is a (non-exhaustive) list I could gather from JIRA:


HdfsChaosMonkeyNothingIsSafeTest failures
CloudSolrStream and ParallelStream can choose replicas that are not active
A new replica should not become leader when all current replicas are down, as it leads to data loss
ZooKeeper related SolrCloud problems
ConcurrentUpdateSolrServer hang in blockUntilFinished
SOLR-8173 CLONE - Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover, as well as lose updates that should have been recovered
Try and prevent too many recovery requests from stacking up and clean up some faulty logic
Solr nodes should go down based on configurable thresholds and not rely on resource exhaustion
Implement hash over all documents to check for shard synchronization
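
The last idea can be illustrated with an order-independent fingerprint over (id, version) pairs, so two replicas can cheaply compare their contents without agreeing on document order — a sketch of the general technique, not of the actual JIRA patch:

```python
import hashlib

def shard_fingerprint(docs):
    """Order-independent fingerprint of a shard's documents.

    XOR-combines a per-document hash of (id, version), so two replicas
    holding the same documents at the same versions produce the same
    value regardless of iteration order.
    """
    acc = 0
    for doc_id, version in docs:
        digest = hashlib.sha256(f"{doc_id}:{version}".encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc
```

Comparing the fingerprints of a leader and a replica would reveal the kind of silent divergence described in this thread.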

I wonder if all these issues could be addressed by a general refactoring of this code instead
of individual patches for every issue.
I know these issues are not easy to reproduce and debug, and I'm not aware of all the
implications of this kind of work.
We are willing to contribute on these issues, although our knowledge of Solr internals might
still be too weak for such an important part of the SolrCloud architecture.
We can provide logs and benchmarks that lead to inconsistencies and/or bad cluster states.
It appears we see better behaviour with a 5-node ZooKeeper cluster than with a 3-node one.
However, there is no sign of any problem on the ZooKeeper side when these errors occur in Solr.

