lucene-dev mailing list archives

From "Andrzej Bialecki (JIRA)" <>
Subject [jira] [Commented] (SOLR-13579) Create resource management API
Date Tue, 30 Jul 2019 14:58:00 GMT


Andrzej Bialecki  commented on SOLR-13579:

On the use cases:

bq. CacheManagerPlugin would only ever reduce the maxRamMB setting of some caches at run time
Again, the current implementation of {{CacheManagerPlugin}} is a simplistic draft.

Ultimately the controlled value of {{maxRamMB}} would be tied proportionally to two main factors:
* the {{hitratio}} metric (i.e. caches with a low hit ratio don't need as much RAM, so their {{maxRamMB}}
would be trimmed down). This is an optimization of resource usage.
* and the total {{ramBytesUsed}} across all cores, used as a hard limit that is proportionally
applied to all caches' {{maxRamMB}}, overriding the above optimization if necessary. This
is a hard control limit, which indeed is related to the current number of cores.
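
A minimal sketch of the hard-limit factor (hypothetical class and method names, not the actual plugin API): when the aggregate of requested {{maxRamMB}} exceeds the global pool limit, every cache's limit is scaled down proportionally so the relative shares are preserved.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HardLimitSketch {
    /**
     * Proportionally scales each cache's maxRamMB so that their sum
     * does not exceed totalLimitMB. Caches keep their relative shares.
     */
    static Map<String, Double> applyHardLimit(Map<String, Double> maxRamMB, double totalLimitMB) {
        double total = maxRamMB.values().stream().mapToDouble(Double::doubleValue).sum();
        if (total <= totalLimitMB) {
            return new LinkedHashMap<>(maxRamMB); // under the ceiling, nothing to do
        }
        double scale = totalLimitMB / total;
        Map<String, Double> scaled = new LinkedHashMap<>();
        maxRamMB.forEach((cache, mb) -> scaled.put(cache, mb * scale));
        return scaled;
    }

    public static void main(String[] args) {
        Map<String, Double> caches = new LinkedHashMap<>();
        caches.put("core1.filterCache", 512.0);
        caches.put("core2.filterCache", 512.0);
        // aggregate (1024 MB) is double the limit, so each cache drops to 256 MB
        System.out.println(applyHardLimit(caches, 512.0));
    }
}
```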

The initial value of {{maxRamMB}} would still come from the config, as it does today, but
at runtime it would be adjusted both up and down from that value depending on the situation.

bq. users who want to use these pools need to change the individual cache's configured maxRamMB
to be much higher then they are today. (potentially to the same value as the maxRamMB of the
I think it would work the other way around - users can specify whatever they want, but if
the admin sets the total {{maxRamMB}} to a lower value than the aggregate that users requested,
their requests will be proportionally scaled down (see also above for the finer-grained
optimization adjustment, not just the hard limit).
So in reality the amount of RAM each core and each cache gets would be determined as follows:
* the initial value would be set from the config's {{maxRamMB}}, unless it would already hit the
global limit
* this value would be quickly trimmed down based on the {{hitratio}}, and eventually scaled
up as the {{hitratio}} increases. Some other metric could be used here, too, to make this
scale-down/up process more efficient.
* if a bunch of other cores were suddenly allocated to the same node, it's likely that the
aggregate {{ramBytesUsed}} would hit the global ceiling and the plugin would start trimming
down the {{maxRamMB}} of each cache in each core (possibly using some weighted scheme instead
of a purely proportional one).
* if the number of cores were to decrease so that their aggregate {{ramBytesUsed}} fell
below a percentage of the hard limit, say 80%, the plugin could proportionally increase each
{{maxRamMB}} so that the aggregate equals e.g. 80% of the hard limit.
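
The lifecycle above can be sketched as a per-cycle adjustment rule. The thresholds, step size, and 80% target below are illustrative knobs, not settings from the patch:

```java
public class CacheSizingSketch {
    // Hypothetical tuning knobs, not actual plugin settings.
    static final double HIT_RATIO_LOW = 0.2;   // below this, trim the cache
    static final double HIT_RATIO_HIGH = 0.8;  // above this, grow the cache
    static final double STEP = 0.1;            // adjust by 10% per cycle
    static final double TARGET_FRACTION = 0.8; // re-expand toward 80% of the hard limit

    /**
     * One adjustment cycle for a single cache: trim when hitratio is low,
     * grow when it is high, but never past the per-cache configured max.
     */
    static double adjust(double currentMB, double configuredMB, double hitRatio) {
        if (hitRatio < HIT_RATIO_LOW) {
            return currentMB * (1.0 - STEP);
        } else if (hitRatio > HIT_RATIO_HIGH) {
            return Math.min(configuredMB, currentMB * (1.0 + STEP));
        }
        return currentMB;
    }

    /**
     * After cores are removed: if aggregate usage fell below
     * TARGET_FRACTION of the hard limit, return the factor by which
     * caches can be scaled back up so the aggregate meets that fraction.
     */
    static double reExpandFactor(double aggregateUsedMB, double hardLimitMB) {
        double target = TARGET_FRACTION * hardLimitMB;
        return aggregateUsedMB < target ? target / aggregateUsedMB : 1.0;
    }

    public static void main(String[] args) {
        // a cache with a low hit ratio gets trimmed by 10% this cycle
        System.out.println(adjust(100.0, 200.0, 0.15));
    }
}
```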

bq.  how/when can/should a CacheManagerPlugin assume/recognize that the memory pressure has
By using the {{ramBytesUsed}} metric for the hard limit, and the {{hitratio}} metric for the optimization.

If {{hitratio}} is high then the cache needs as much RAM as possible to expand, until we
either hit the core's limit or the global limit, or the {{hitratio}} falls below a threshold.
If {{hitratio}} falls below a threshold then we know the cache contains mostly useless items,
so we can trim down its {{maxRamMB}}, which will lead to evictions, which in turn will lead
to an increased {{hitratio}}.
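
This feedback can be sketched as a simple loop. The hit-ratio model below is a made-up stand-in for the real {{hitratio}} metric, used only to show that trimming converges:

```java
public class HitRatioFeedbackSketch {
    static final double THRESHOLD = 0.3; // illustrative, not a patch setting

    /**
     * Toy model: a cache no larger than the hot working set keeps a high
     * hit ratio; oversized caches dilute it. A real plugin would read the
     * hitratio metric instead of modeling it.
     */
    static double modeledHitRatio(double cacheMB, double hotSetMB) {
        return cacheMB <= hotSetMB ? 0.9 : 0.9 * (hotSetMB / cacheMB);
    }

    /** Trim maxRamMB by 10% per cycle until the hit ratio recovers. */
    static double trimUntilHealthy(double maxRamMB, double hotSetMB) {
        double size = maxRamMB;
        while (modeledHitRatio(size, hotSetMB) < THRESHOLD) {
            size *= 0.9; // evictions shrink the cache toward the hot set
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println("trimmed maxRamMB: " + trimUntilHealthy(100.0, 10.0));
    }
}
```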

> Create resource management API
> ------------------------------
>                 Key: SOLR-13579
>                 URL:
>             Project: Solr
>          Issue Type: New Feature
>            Reporter: Andrzej Bialecki 
>            Assignee: Andrzej Bialecki 
>            Priority: Major
>         Attachments: SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch,
SOLR-13579.patch, SOLR-13579.patch
> Resource management framework API supporting the goals outlined in SOLR-13578.

