jackrabbit-oak-issues mailing list archives

From Michael Dürig (JIRA) <j...@apache.org>
Subject [jira] [Updated] (OAK-3330) FileStore lock contention with concurrent writers
Date Wed, 02 Sep 2015 11:46:45 GMT

     [ https://issues.apache.org/jira/browse/OAK-3330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Dürig updated OAK-3330:
-------------------------------
    Attachment: OAK-3330.patch

Patch with the cache I used for the above test. It is still very raw. It is carefully crafted
not to duplicate the underlying data of segments, even though the segment cache in
{{SegmentTracker}} and the one in {{FileStore}} might contain the same (equals) segment but
a different instance (!=).
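
To illustrate the idea (this is a hand-written sketch, not the attached patch): a cache that interns segments by key can hand back the already-cached instance whenever an equal segment is offered, so two equal {{Segment}} objects held by {{SegmentTracker}} and {{FileStore}} never pin two copies of the same underlying data. Class and method names below are made up for the example.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch only: an interning cache that avoids duplicating the
// backing data of equal-but-distinct segment instances.
final class DedupingSegmentCache<K, V> {
    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();

    /**
     * Returns the cached value for {@code key} if one exists, otherwise caches
     * {@code value} and returns it. Callers should continue with the returned
     * instance and let their own copy become garbage, so equal segments end up
     * sharing a single backing buffer.
     */
    V intern(K key, V value) {
        V existing = cache.putIfAbsent(key, value);
        return existing != null ? existing : value;
    }

    V get(K key) {
        return cache.get(key);
    }
}
{code}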

> FileStore lock contention with concurrent writers
> -------------------------------------------------
>
>                 Key: OAK-3330
>                 URL: https://issues.apache.org/jira/browse/OAK-3330
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segmentmk
>            Reporter: Michael Dürig
>            Assignee: Michael Dürig
>              Labels: compaction
>         Attachments: OAK-3330.patch
>
>
> Concurrently writing to the file store can lead to severe lock contention in {{FileStore#readSegment}}.
> That method searches the current {{TarWriter}} instance for the segment once it cannot
> be found in any of the {{TarReader}} instances. This is where it synchronizes on the
> {{FileStore}} instance, which causes the contention.
> The effect is only observable once the segment cache becomes full and reads actually
> need to go to the file store. A possible improvement would thus be to pin segments from the
> current tar writer in the cache. Alternatively, we could try to ease locking by employing
> read/write locks where possible.
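
A minimal sketch of the read/write lock alternative mentioned in the description above, assuming we only need exclusive access while the current tar writer is actually being mutated. The names ({{SegmentLookup}}, {{TarWriterStub}}, {{readEntry}}, ...) are placeholders for this example, not the real Oak classes.

{code:java}
import java.nio.ByteBuffer;
import java.util.UUID;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: replace the coarse synchronized block with a
// ReentrantReadWriteLock so concurrent readers only block each other
// when a writer holds the exclusive lock.
final class SegmentLookup {
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final TarWriterStub currentWriter = new TarWriterStub();

    ByteBuffer readSegment(UUID id) {
        // Many readers may search the current tar writer concurrently.
        rwLock.readLock().lock();
        try {
            return currentWriter.readEntry(id);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    void writeSegment(UUID id, ByteBuffer data) {
        // Writers take the exclusive lock only while mutating the writer.
        rwLock.writeLock().lock();
        try {
            currentWriter.writeEntry(id, data);
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    /** Placeholder standing in for the real TarWriter. */
    static final class TarWriterStub {
        private final java.util.Map<UUID, ByteBuffer> entries = new java.util.HashMap<>();
        ByteBuffer readEntry(UUID id) { return entries.get(id); }
        void writeEntry(UUID id, ByteBuffer data) { entries.put(id, data); }
    }
}
{code}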



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
