lucene-solr-user mailing list archives

From Sandeep Dharembra <sandeep.dharem...@gmail.com>
Subject Re: All shards placed on the same node
Date Mon, 06 Apr 2020 01:49:08 GMT
Hey,

Please change the precision of the "cores" cluster preference from 10 to 1
and then try again.

With the current settings, two nodes are not treated as different until their
core counts differ by at least 10, so Solr considers them equally loaded and
may keep placing shards on the same node.
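
For example, the cluster-preferences from your config would become (a sketch; only the "cores" precision changes, the other entries stay as they are):

```json
{
  "cluster-preferences":[
    {
      "minimize":"cores",
      "precision":1},
    {
      "precision":100,
      "maximize":"freedisk"},
    {
      "minimize":"sysLoadAvg",
      "precision":3}]}
```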

Thanks,


On Mon, Apr 6, 2020, 2:09 AM Kudrettin Güleryüz <kudrettin@gmail.com> wrote:

> Hi,
>
> Running 7.3.1 on an 8-node Solr cloud. Why would Solr create all 6 shards
> on the same node? I don't want to restrict Solr to a maximum number of
> shards per node, but creating all shards on a single node doesn't look
> right to me.
>
> Will Solr use all space on one node before using another one? Here is my
> autoscaling configuration:
>
> {
>   "cluster-preferences":[
>     {
>       "minimize":"cores",
>       "precision":10},
>     {
>       "precision":100,
>       "maximize":"freedisk"},
>     {
>       "minimize":"sysLoadAvg",
>       "precision":3}],
>   "cluster-policy":[{
>       "freedisk":"<10",
>       "replica":"0",
>       "strict":"true"}],
>   "triggers":{".auto_add_replicas":{
>       "name":".auto_add_replicas",
>       "event":"nodeLost",
>       "waitFor":120,
>       "actions":[
>         {
>           "name":"auto_add_replicas_plan",
>           "class":"solr.AutoAddReplicasPlanAction"},
>         {
>           "name":"execute_plan",
>           "class":"solr.ExecutePlanAction"}],
>       "enabled":true}},
>   "listeners":{".auto_add_replicas.system":{
>       "trigger":".auto_add_replicas",
>       "afterAction":[],
>       "stage":[
>         "STARTED",
>         "ABORTED",
>         "SUCCEEDED",
>         "FAILED",
>         "BEFORE_ACTION",
>         "AFTER_ACTION",
>         "IGNORED"],
>       "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
>       "beforeAction":[]}},
>   "properties":{}}
>
