jackrabbit-oak-issues mailing list archives

From "Thomas Mueller (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (OAK-3007) SegmentStore cache does not take "string" map into account
Date Tue, 07 Jul 2015 07:16:05 GMT

    [ https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616304#comment-14616304 ]

Thomas Mueller commented on OAK-3007:
-------------------------------------

> Thomas Mueller, could you also add a unit test for StringCache?

Sure.

> I believe that the root cause for the compaction case filling up the cache is OAK-3075,
> but this patch should be applied nonetheless.

As far as I understand, OAK-3075 is not the reason, but yes, I understand that some other
issue caused this many strings to be loaded. It would be nice to find the root cause. To
find it, the patch here (OAK-3007) could be applied and the compaction test re-run. That
should no longer result in out-of-memory in the SegmentStore cache, but in slow compaction,
which in turn can be analyzed with a profiler. Or it might result in yet another
out-of-memory (in a different place), which can be analyzed in turn... So the current patch
for OAK-3007 should help in understanding the _real_ problem.

> but this patch should be applied nonetheless.

Yes, I think applying this patch to trunk eventually (after writing more unit tests) would
be good. Backporting it is probably not all that urgent.



> SegmentStore cache does not take "string" map into account
> ----------------------------------------------------------
>
>                 Key: OAK-3007
>                 URL: https://issues.apache.org/jira/browse/OAK-3007
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: segmentmk
>            Reporter: Thomas Mueller
>             Fix For: 1.3.3
>
>         Attachments: OAK-3007-2.patch, OAK-3007.patch
>
>
> The SegmentStore cache size calculation ignores the size of the field Segment.string
> (a concurrent hash map). It looks like a regular segment in a memory-mapped file has the
> size 1024, no matter how many strings are loaded in memory. This can lead to out of memory.
> There seems to be no way to limit (configure) the amount of memory used by strings. In one
> example, 100'000 segments are loaded in memory, and 5 GB are used for strings in that map.
> We need a way to configure the amount of memory used for that. This is essentially a cache.
> OAK-2688 does this, but it would be better to have one cache with a configurable size limit.
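The kind of configurable limit the description asks for could look roughly like the sketch below: a string cache that weighs each entry by an approximate byte size and evicts least-recently-used entries once a byte budget is exceeded. This is a hypothetical illustration, not Oak's actual API; the class name, the `recordId` key type, and the per-entry overhead constant are all assumptions.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a memory-bounded string cache (not Oak's actual API).
// Entries are weighed by an approximate byte size and evicted in LRU order
// once the configured byte budget is exceeded.
public class BoundedStringCache {

    // Rough per-entry overhead (map node, boxed key, object headers) - an assumption.
    private static final long OVERHEAD_BYTES = 48;

    private final long maxBytes;
    private long currentBytes = 0;

    // accessOrder=true makes iteration start at the least recently used entry.
    private final LinkedHashMap<Long, String> map =
            new LinkedHashMap<>(16, 0.75f, true);

    public BoundedStringCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Approximate memory footprint: ~2 bytes per char plus fixed overhead.
    private static long weigh(String s) {
        return OVERHEAD_BYTES + 2L * s.length();
    }

    public synchronized String get(long recordId) {
        return map.get(recordId);
    }

    public synchronized void put(long recordId, String value) {
        String previous = map.put(recordId, value);
        if (previous != null) {
            currentBytes -= weigh(previous);
        }
        currentBytes += weigh(value);
        evictIfNeeded();
    }

    public synchronized long currentBytes() {
        return currentBytes;
    }

    // Drop least recently used entries until we are back under the budget.
    private void evictIfNeeded() {
        Iterator<Map.Entry<Long, String>> it = map.entrySet().iterator();
        while (currentBytes > maxBytes && it.hasNext()) {
            Map.Entry<Long, String> eldest = it.next();
            currentBytes -= weigh(eldest.getValue());
            it.remove();
        }
    }
}
```

In production code one would more likely use a weigher on an existing cache implementation (for example a Guava cache with a maximum weight), which gives the same effect without hand-rolled eviction.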



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
