lucene-solr-user mailing list archives

From Noble Paul <noble.p...@gmail.com>
Subject Re: Solr Autoscaling multi-AZ rules
Date Mon, 12 Feb 2018 20:17:56 GMT
>>Goal: No node should have more than 6 shards

This is not possible today

 {"replica": "<7", "node":"#ANY"} , means don't put more than 7
replicas of the collection (irrespective of the shards) in a given
node
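
For reference, such a rule set is installed through the autoscaling API. A minimal
sketch (the host/port and the two-rule policy shown are illustrative assumptions,
not taken from this thread):

  # v2 API endpoint in Solr 7.2; /solr/admin/autoscaling also works
  curl -X POST http://localhost:8983/api/cluster/autoscaling \
    -H 'Content-Type: application/json' \
    -d '{
      "set-cluster-policy": [
        {"replica": "<2", "shard": "#EACH", "node": "#ANY"},
        {"replica": "<7", "node": "#ANY"}
      ]
    }'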

What do you mean by distinct 'RF'? I think we are mixing up the
terminology a bit here.
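
For what it's worth: the policy framework only counts replicas; "replicationFactor"
is just the per-shard replica count requested at CREATE time. A sketch, with a
made-up collection name:

  # 42 shards x replicationFactor 2 = 84 replicas (cores) in total,
  # placed according to the cluster policy; "test_coll" is hypothetical
  curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=test_coll&numShards=42&replicationFactor=2"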

On Wed, Feb 7, 2018 at 1:38 PM, Jeff Wartes <jwartes@whitepages.com> wrote:
> I’ve been messing around with the Solr 7.2 autoscaling framework this week. Some things
> seem trivial, but I’m also running into questions and issues. If anyone else has experience
> with this stuff, I’d be glad to hear it. Specifically:
>
>
> Context:
> -One collection, consisting of 42 shards, where up to 6 shards can fit on a single node
> (which means 7 nodes per Replication Factor).
> -Three AZs, each with its own ip_2 value.
>
> Goals:
>
> Goal: Fully utilize available nodes.
> Cluster Preference: {"maximize": "cores"}
>
> Goal: No node should have more than one replica of a given shard
> Rule: {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
>
> Goal: No node should have more than 6 shards
> Rule: {"replica": "<7", "node":"#ANY"}
>
> Goal: Where possible, distinct RFs should each exist in an AZ.
> (Example 1: I’d like 7 nodes with a complete RF in AZ 1 and 7 nodes with a complete
> RF in AZ 2, and not end up with, say, both shard2 replicas in AZ 1.)
> (Example 2: If I have 14 nodes in AZ 1 and 7 in AZ 2, I should have two full RFs in
> AZ 1 and one in AZ 2.)
> Rule: ???
>
> I could have multiple non-strict rules perhaps? Like:
> {"replica": "<2", "shard": "#EACH", "ip_2": "1", "strict":false}
> {"replica": "<3", "shard": "#EACH", "ip_2": "1", "strict":false}
> {"replica": "<4", "shard": "#EACH", "ip_2": "1", "strict":false}
> {"replica": "<2", "shard": "#EACH", "ip_2": "2", "strict":false}
> {"replica": "<3", "shard": "#EACH", "ip_2": "2", "strict":false}
> {"replica": "<4", "shard": "#EACH", "ip_2": "2", "strict":false}
> etc
> So having more than one RF in an AZ is a technical “violation”, but if placement
> minimizes non-strict violations, replicas would tend to get placed correctly.
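
[As a sketch, strict and non-strict rules all go into the same cluster-policy list,
submitted as above; whether the overlapping "<2"/"<3"/"<4" bounds per AZ behave as
intended is exactly the open question here:]

  {
    "set-cluster-policy": [
      {"replica": "<2", "shard": "#EACH", "node": "#ANY"},
      {"replica": "<7", "node": "#ANY"},
      {"replica": "<2", "shard": "#EACH", "ip_2": "1", "strict": false},
      {"replica": "<2", "shard": "#EACH", "ip_2": "2", "strict": false}
    ]
  }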
>
>
> Given a working set of rules, I’m still having trouble with two things:
>
>   1.  I’ve manually created the “.system” collection, as it didn’t seem to get
> created automatically. However, autoscaling activity is not getting logged to it.
>   2.  I can’t seem to figure out how to scale up.
>      *   I’d presumed editing the collection’s “replicationFactor” would do the
> trick, but it does not.
>      *   The “node-up” trigger will serve to replace lost replicas, but won’t
> otherwise take advantage of additional capacity. There’s a UTILIZENODE command in
> 7.2, but it appears that’s still something you need to trigger manually.
>
> Anyone played with this stuff?
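
On the scale-up items: the event type behind the "node-up" idea is nodeAdded, and
UTILIZENODE is indeed a manual Collections API call today. A sketch of both (the
trigger name, timings and node address are placeholders):

  curl -X POST http://localhost:8983/api/cluster/autoscaling \
    -H 'Content-Type: application/json' \
    -d '{
      "set-trigger": {
        "name": "node_added_trigger",
        "event": "nodeAdded",
        "waitFor": "5s",
        "enabled": true,
        "actions": [
          {"name": "compute_plan", "class": "solr.ComputePlanAction"},
          {"name": "execute_plan", "class": "solr.ExecutePlanAction"}
        ]
      }
    }'

  # move replicas onto an under-utilized node, per the cluster policy
  curl "http://localhost:8983/solr/admin/collections?action=UTILIZENODE&node=host1:8983_solr"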



-- 
-----------------------------------------------------
Noble Paul
