ignite-dev mailing list archives

From Dmitriy Pavlov <dpavlov....@gmail.com>
Subject Re: Page replacement policy improvements (when persistent is enabled)
Date Fri, 03 Aug 2018 09:07:48 GMT
Hi Vladimir,

I really feel that the page replacement approach can be improved. At the
moment I don't think page nature alone will give us much, because usage
frequency can be independent of page type.

I also noticed a couple of tickets done by Ilya Kasnacheev and Eugeniy
Stanilovskly that were more or less related to page replacement
improvements. I hope they will step in.

Could we consider somehow involving an index page's level in the B+ Tree?
This could be helpful: the tree root should never be replaced.
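To make the idea concrete, the replacement policy could filter eviction candidates by their B+ Tree level. A minimal sketch follows; `PageInfo`, `treeLevel`, and `pickVictim` are hypothetical names for illustration, not Ignite's actual internals:

```java
import java.util.*;

// Hypothetical page descriptor; the fields are illustrative only.
class PageInfo {
    final long pageId;
    final int treeLevel;      // 0 = leaf/data page, >0 = inner index page
    final long lastAccessTs;  // timestamp used by the random-sample policy

    PageInfo(long id, int lvl, long ts) {
        pageId = id;
        treeLevel = lvl;
        lastAccessTs = ts;
    }
}

class LevelAwareReplacement {
    /**
     * Pick a victim from a random sample: the tree root is excluded
     * outright, lower-level (data) pages are preferred over index pages,
     * and ties are broken by oldest access timestamp.
     */
    static PageInfo pickVictim(List<PageInfo> sample, int rootLevel) {
        PageInfo victim = null;
        for (PageInfo p : sample) {
            if (p.treeLevel == rootLevel)  // never replace the tree root
                continue;
            if (victim == null
                || p.treeLevel < victim.treeLevel
                || (p.treeLevel == victim.treeLevel
                    && p.lastAccessTs < victim.lastAccessTs))
                victim = p;
        }
        return victim;
    }
}
```

With this filter a scan that touches many data pages can only displace other data pages, while upper index levels survive.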

I totally agree that metrics to monitor and understand how page
replacement works in the wild would benefit us a lot.
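For instance, per-page-type hit/miss counters could be as simple as the following sketch; the `PageType` enum and class names are illustrative, not Ignite's API:

```java
import java.util.EnumMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative page-type taxonomy, not Ignite's actual one.
enum PageType { DATA, INDEX_LEAF, INDEX_INNER, META }

class PageCacheMetrics {
    private final EnumMap<PageType, LongAdder> hits = new EnumMap<>(PageType.class);
    private final EnumMap<PageType, LongAdder> misses = new EnumMap<>(PageType.class);

    PageCacheMetrics() {
        for (PageType t : PageType.values()) {
            hits.put(t, new LongAdder());
            misses.put(t, new LongAdder());
        }
    }

    // LongAdder keeps the hot path cheap under concurrent page accesses.
    void onAccess(PageType t, boolean hit) {
        (hit ? hits : misses).get(t).increment();
    }

    /** Hit ratio per page type; a type with no accesses reports 1.0. */
    double hitRatio(PageType t) {
        long h = hits.get(t).sum(), m = misses.get(t).sum();
        return h + m == 0 ? 1.0 : (double) h / (h + m);
    }
}
```

A persistently low ratio for INDEX_LEAF relative to DATA would confirm the eviction bias Vladimir describes below.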

Dmitriy Pavlov

On Fri, Aug 3, 2018 at 10:19 Vladimir Ozerov <vozerov@gridgain.com> wrote:

> Igniters,
> I heard some complaints about our page replacement algorithm: index
> pages can be evicted from memory too often. I reviewed our current
> implementation, and it looks like we have chosen a very simple approach,
> eviction of random pages, without taking into account their nature (data vs
> index) or typical usage patterns (such as scans).
> With our heap-based architecture a typical SQL query is executed as follows:
> 1) Read non-leaf index pages, then in a loop:
> 2.1) Read 1 leaf index page
> 2.2) Read several hundred data pages
> This way index pages on average have older timestamps than data pages and
> have a good probability of being evicted.
> Another major problem is scan resistance, which doesn't seem to be covered
> at all.
> My question is: what was the reason for choosing a random pseudo-LRU
> algorithm instead of a commonly used variation of *real* LRU (such as LRU-K,
> 2Q, etc.)? Did we perform any evaluation of its effectiveness?
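[Editor's note: for reference, here is 2Q in miniature, per Johnson and Shasha's simplified variant: new pages enter a small FIFO (A1in); ids evicted from it are remembered in a ghost queue (A1out); a page re-referenced while "ghosted" is promoted to the main LRU (Am). A one-pass scan only churns A1in, so hot pages in Am survive it. A toy sketch, not Ignite code:]

```java
import java.util.*;

// Toy 2Q cache: a1in = FIFO for first-time pages, a1out = ghost ids
// of pages recently evicted from a1in, am = main LRU for proven-hot pages.
class TwoQ<K> {
    private final int capacity;          // total resident pages
    private final int a1inCap, a1outCap;
    private final Deque<K> a1in = new ArrayDeque<>();
    private final Deque<K> a1out = new ArrayDeque<>();      // ids only, no data
    private final LinkedHashSet<K> am = new LinkedHashSet<>(); // LRU order

    TwoQ(int capacity) {
        this.capacity = capacity;
        this.a1inCap = Math.max(1, capacity / 4);   // tunable thresholds
        this.a1outCap = Math.max(1, capacity / 2);
    }

    boolean contains(K k) { return a1in.contains(k) || am.contains(k); }

    void access(K k) {
        if (am.contains(k)) {            // hot page: move to MRU end of Am
            am.remove(k);
            am.add(k);
        } else if (a1in.contains(k)) {
            // still in the FIFO: do nothing (the 2Q rule)
        } else if (a1out.contains(k)) {  // re-referenced ghost: promote to Am
            a1out.remove(k);
            evictIfFull();
            am.add(k);
        } else {                         // brand new page: enter the FIFO
            evictIfFull();
            a1in.addLast(k);
        }
    }

    private void evictIfFull() {
        if (a1in.size() + am.size() < capacity)
            return;
        if (a1in.size() >= a1inCap) {    // reclaim from the FIFO first
            K ghost = a1in.pollFirst();
            a1out.addLast(ghost);
            if (a1out.size() > a1outCap)
                a1out.pollFirst();       // forget the oldest ghost
        } else if (!am.isEmpty()) {      // else drop the LRU page from Am
            Iterator<K> it = am.iterator();
            it.next();
            it.remove();
        }
    }
}
```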
> I am thinking of creating a new IEP to evaluate and possibly improve our page
> replacement as follows:
> 1) Implement metrics to count page cache hit/miss by page type [1]
> 2) Implement a *heat map* that can optionally be enabled to track page
> hits/misses per page or per specific object (cache, index)
> 3) Run heat map on typical workloads (lookups, scans, joins, etc) to get a
> baseline
> 4) Prototype several LRU-based implementations and see if they give any
> benefit. It makes sense to start with minor improvements to the current
> algorithm (e.g. favor index pages over data pages, play with the sample size,
> replace timestamps with read counters, etc.).
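[Editor's note: item 2 above could be prototyped as a simple per-object hit/miss table. A sketch with illustrative names, not Ignite's API:]

```java
import java.util.*;

// Per-object (cache or index) hit/miss counters keyed by an object id,
// enough to spot which structures suffer the most page misses.
class PageHeatMap {
    static final class Counters { long hits, misses; }

    private final Map<Integer, Counters> byObject = new HashMap<>();

    void record(int objectId, boolean hit) {
        Counters c = byObject.computeIfAbsent(objectId, id -> new Counters());
        if (hit) c.hits++; else c.misses++;
    }

    /** Object ids ordered by miss count, worst offenders first. */
    List<Integer> worstObjects() {
        List<Integer> ids = new ArrayList<>(byObject.keySet());
        ids.sort((a, b) ->
            Long.compare(byObject.get(b).misses, byObject.get(a).misses));
        return ids;
    }
}
```

Running such a table under the baseline workloads of item 3 would show whether misses cluster on index structures, as the complaints suggest.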
> In any case, the first two action items would be a good addition to product
> monitoring.
> What do you think?
> [1] https://issues.apache.org/jira/browse/IGNITE-8580
