jackrabbit-oak-issues mailing list archives

From Michael Dürig (JIRA) <j...@apache.org>
Subject [jira] [Resolved] (OAK-4277) Finalise de-duplication caches
Date Wed, 27 Jul 2016 10:44:21 GMT

     [ https://issues.apache.org/jira/browse/OAK-4277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Dürig resolved OAK-4277.
       Resolution: Fixed
    Fix Version/s:     (was: 1.6)

Fixed at http://svn.apache.org/viewvc?rev=1754240&view=rev

> Finalise de-duplication caches
> ------------------------------
>                 Key: OAK-4277
>                 URL: https://issues.apache.org/jira/browse/OAK-4277
>             Project: Jackrabbit Oak
>          Issue Type: Task
>          Components: segment-tar
>            Reporter: Michael Dürig
>            Assignee: Michael Dürig
>              Labels: caching, compaction, gc, monitoring
>             Fix For: Segment Tar 0.0.8
> OAK-3348 "promoted" the record cache to a de-duplication cache, which is heavily relied upon during compaction. Node states now also go through this cache, which can be seen as one concern of the former compaction map (the other being equality).
> The current implementation of these caches is quite simple and served its purpose as a POC for getting rid of the "back references" (OAK-3348). Before we are ready for a release, though, we need to finalise a couple of things:
> * Implement cache monitoring and management
> * Make the currently hard-coded cache parameters configurable
> * Implement proper UTs 
> * Add proper Javadoc
> * Fine-tune the eviction logic and move it into the caches themselves (instead of relying on clients to evict items proactively)
> * Fine-tune the caching strategies: for the node state cache, the cost of an item is currently determined solely by its position (depth) in the tree. We might want to take further factors into account (e.g. the number of child nodes). We might also want to implement pinning so that e.g. checkpoints are never evicted.
> * Finally, we need to decide who should own this cache. It currently lives with the {{SegmentWriter}}. However, this is IMO not the correct location: during compaction there is a dedicated segment writer whose cache needs to be shared with the primary segment writer upon successful completion.
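The depth-based cost and pinning ideas above can be sketched as follows. This is a minimal illustration only; the class and method names ({{DepthAwareCache}}, {{evictDeeperThan}}) are hypothetical and not part of Oak's actual {{SegmentWriter}} API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a de-duplication cache whose eviction is driven
// by an entry's depth in the tree: shallow entries (e.g. records near the
// root, such as checkpoints) survive eviction, deep ones are dropped first.
public class DepthAwareCache {
    private static final class Entry {
        final String recordId;
        final int depth;
        Entry(String recordId, int depth) {
            this.recordId = recordId;
            this.depth = depth;
        }
    }

    private final Map<String, Entry> entries = new HashMap<>();

    /** Remember that {@code key} was written as {@code recordId} at {@code depth}. */
    public void put(String key, String recordId, int depth) {
        entries.put(key, new Entry(recordId, depth));
    }

    /** Return the previously written record id for {@code key}, or null. */
    public String get(String key) {
        Entry e = entries.get(key);
        return e == null ? null : e.recordId;
    }

    /** Evict all entries deeper than {@code maxDepth}; shallower entries are effectively pinned. */
    public void evictDeeperThan(int maxDepth) {
        entries.values().removeIf(e -> e.depth > maxDepth);
    }

    public int size() {
        return entries.size();
    }
}
```

A real implementation would of course bound memory by weight rather than by a fixed depth cutoff, and would fold in additional cost signals such as the number of child nodes, as the issue suggests.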

This message was sent by Atlassian JIRA
