jackrabbit-oak-issues mailing list archives

From "Alex Parvulescu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (OAK-4966) Re-introduce a blocker for compaction based on available heap
Date Wed, 26 Oct 2016 15:11:58 GMT

    [ https://issues.apache.org/jira/browse/OAK-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15608741#comment-15608741 ]

Alex Parvulescu commented on OAK-4966:
--------------------------------------

bq. I agree on the upfront vs. continuous checking of available heap. Let's start simple and improve when required. Let's keep an eye on this part during our testing.
That's not totally correct though. If we only check upfront, the node deduplication cache will still be empty before a first compaction, so the check would pass; but during compaction the caches can spike to more than {{2GB}}, crashing an instance that only has {{1GB}} or {{2GB}} of available heap. So if we go with a percentage of available heap, it needs to be a continuous check.
My proposal was to provide an estimate (even if higher than the real usage), based on the max size of all the existing caches, and stop early if there's not enough heap.
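
As a minimal sketch of that estimate-based early stop (the method name is hypothetical and the cache maxima are the figures quoted in the issue, not values read from the actual Oak configuration):

{code:java}
// Hedged sketch: the name and the cache maxima are assumptions for
// illustration, not the actual Oak defaults or API.
static boolean enoughHeapForCompaction() {
    long segmentCacheMax = 256L * 1024 * 1024;        // assumed segment cache maximum
    long writerCachesMax = 2L * 1024 * 1024 * 1024;   // assumed combined writer cache maximum
    long estimatedNeed = segmentCacheMax + writerCachesMax;

    Runtime rt = Runtime.getRuntime();
    long availableHeap = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());

    // deliberately pessimistic: skip compaction if the cache maxima alone
    // could exhaust the remaining heap
    return availableHeap >= estimatedNeed;
}
{code}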

I'll provide a simpler version that only relies on polling the available heap and comparing it with a configurable threshold (10%?), stopping compaction if needed.
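
A rough sketch of what that polling check could look like (the helper name, threshold value and polling hook are assumptions, not the final patch):

{code:java}
// Hedged sketch of the polling approach: compare the currently available
// heap against a configurable percentage of the maximum heap.
static boolean sufficientMemory(double thresholdPercent) {
    Runtime rt = Runtime.getRuntime();
    long max = rt.maxMemory();
    long available = max - (rt.totalMemory() - rt.freeMemory());
    return available >= max * (thresholdPercent / 100.0);
}

// polled periodically while compaction runs (hook point is hypothetical)
if (!sufficientMemory(10.0)) {
    // cancel compaction instead of risking an OOME
}
{code}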

> Re-introduce a blocker for compaction based on available heap
> -------------------------------------------------------------
>
>                 Key: OAK-4966
>                 URL: https://issues.apache.org/jira/browse/OAK-4966
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segment-tar
>            Reporter: Alex Parvulescu
>            Assignee: Alex Parvulescu
>             Fix For: 1.6, 1.5.13
>
>         Attachments: OAK-4966.patch
>
>
> As seen in a local test, running compaction on a tight heap can lead to OOMEs. There used to be a best-effort barrier against this situation ('not enough heap for compaction'), but it was removed together with the compaction maps.
> I think it makes sense to add it again, based on the max size of some of the caches: the segment cache ({{256MB}} by default [0]), some writer caches (which can go up to {{2GB}} combined [1]), and probably others I missed.
> [0] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentCache.java#L48
> [1] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/WriterCacheManager.java#L50



