jackrabbit-oak-issues mailing list archives

From Michael Dürig (JIRA) <j...@apache.org>
Subject [jira] [Commented] (OAK-3330) FileStore lock contention with concurrent writers
Date Wed, 02 Sep 2015 11:42:45 GMT

    [ https://issues.apache.org/jira/browse/OAK-3330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727206#comment-14727206

Michael Dürig commented on OAK-3330:

Caching all segments of the current tar writer improves the situation quite a bit. With the below
configuration of {{SegmentCompactionIT}} I was able to write 2.5 GB of content without the cache
vs. 5.8 GB of content with it. In the former case the {{FileStore}} monitor was fully contended,
while in the latter no lock was contended more than 4% of the time.

    private volatile int lockWaitTime = 60;
    private volatile int maxReaders = 10;
    private volatile int maxWriters = 16;
    private volatile long maxStoreSize = 120000000000L;
    private volatile int maxBlobSize = 10000;
    private volatile int maxStringSize = 10000;
    private volatile int maxReferences = 10;
    private volatile int maxWriteOps = 10000;
    private volatile int maxNodeCount = 1000;
    private volatile int maxPropertyCount = 1000;
    private volatile int nodeRemoveRatio = 10;
    private volatile int propertyRemoveRatio = 10;
    private volatile int nodeAddRatio = 40;
    private volatile int addStringRatio = 20;
    private volatile int addBinaryRatio = 0;
    private volatile int compactionInterval = 120;
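The pinning approach described above can be sketched roughly as follows. This is a minimal illustration, not Oak's actual API: the class name, the use of {{String}} segment ids, and the explicit pin/unpin calls are all hypothetical stand-ins for how the {{FileStore}} and {{TarWriter}} would actually cooperate.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: keep the segments of the current tar writer in a
// lock-free map so readSegment can serve them without taking the
// FileStore monitor. Only once the writer is closed and its segments
// become visible through a TarReader are the pinned copies released.
class PinnedSegmentCache {
    private final Map<String, byte[]> pinned = new ConcurrentHashMap<>();

    // Called by the writer whenever it adds a segment to the current tar file.
    void pin(String segmentId, byte[] data) {
        pinned.put(segmentId, data);
    }

    // Called when the current tar writer is closed; its segments are now
    // readable via a TarReader, so the pinned copies are no longer needed.
    void unpinAll() {
        pinned.clear();
    }

    // Lock-free lookup; returns null if the segment is not in the writer.
    byte[] read(String segmentId) {
        return pinned.get(segmentId);
    }
}
```

The point of the sketch is that reads of writer-resident segments hit a {{ConcurrentHashMap}} instead of a synchronized block, so concurrent readers never queue on the store's monitor.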

> FileStore lock contention with concurrent writers
> -------------------------------------------------
>                 Key: OAK-3330
>                 URL: https://issues.apache.org/jira/browse/OAK-3330
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segmentmk
>            Reporter: Michael Dürig
>            Assignee: Michael Dürig
>              Labels: compaction
> Concurrently writing to the file store can lead to severe lock contention in {{FileStore#readSegment}}.
That method searches the current {{TarWriter}} instance for a segment once it could not
be found in any of the {{TarReader}} instances. This is the point where it synchronizes on the
{{FileStore}} instance, which leads to the contention.
> The effect is only observable once the segment cache becomes full and reads actually
need to go to the file store. Thus a possible improvement would be to pin the segments of the
current tar writer in the cache. Alternatively we could try to ease the locking by employing
read/write locks where possible.
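The read/write lock alternative can be sketched as below. This is an illustration under assumed names ({{SegmentStore}}, {{String}} ids, an in-memory map standing in for the tar files), not the actual {{FileStore}} implementation: the idea is simply that lookups share a read lock while mutations of the current writer take the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the read/write lock idea: concurrent readSegment
// calls proceed in parallel under the shared read lock, and only writes
// that mutate the current tar writer serialize on the write lock.
class SegmentStore {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Map<String, byte[]> segments = new HashMap<>();

    byte[] readSegment(String id) {
        rwLock.readLock().lock();   // many readers may hold this at once
        try {
            return segments.get(id);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    void writeSegment(String id, byte[] data) {
        rwLock.writeLock().lock();  // writers are exclusive
        try {
            segments.put(id, data);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```

Compared with synchronizing on the store itself, this keeps readers from blocking each other; they only wait while a write is actually in progress.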

This message was sent by Atlassian JIRA
