jackrabbit-oak-issues mailing list archives

From "Andrei Dulceanu (Jira)" <j...@apache.org>
Subject [jira] [Commented] (OAK-9095) MapRecord corruption when adding more than MapRecord.MAX_SIZE entries in branch record
Date Mon, 01 Jun 2020 14:19:00 GMT

    [ https://issues.apache.org/jira/browse/OAK-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17121049 ]

Andrei Dulceanu commented on OAK-9095:

[~thomasm], can you have a look at the patch attached?

I documented the limit, added a safety check when writing branch records (for completeness,
since with the latest logic the hard limit can no longer be exceeded) and added the logging
you suggested. Please note that I removed the actual node name from the log messages, since
including it made the code unnecessarily complex.

> MapRecord corruption when adding more than MapRecord.MAX_SIZE entries in branch record
> --------------------------------------------------------------------------------------
>                 Key: OAK-9095
>                 URL: https://issues.apache.org/jira/browse/OAK-9095
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: segment-tar
>    Affects Versions: 1.22.3, 1.8.22, 1.30.0
>            Reporter: Andrei Dulceanu
>            Assignee: Andrei Dulceanu
>            Priority: Major
>         Attachments: OAK-9095.patch
> It is now possible to write a {{MapRecord}} with a huge number of entries, going over
the maximum limit, {{MapRecord.MAX_SIZE}} (i.e. 536,870,911 entries). This issue stems from
the fact that the number of entries is checked when writing a map leaf record [0], but not
when writing a map branch record [1]. When more than {{MapRecord.MAX_SIZE}} entries are written
in a branch record [2], the {{entryCount}} overflows into the first bit of the level, essentially
rendering the entire HAMT structure corrupt, since the root branch record will now be stored
at level 1 instead of level 0, and will report an incorrect size as well (i.e. actual size - {{MapRecord.MAX_SIZE}}).
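> To illustrate the overflow, here is a minimal Java sketch. It assumes, from
{{MapRecord.MAX_SIZE}} = (1 << 29) - 1, that the entry count occupies the low 29 bits of a
packed head int with the level in the bits above it; the helper names are made up and this
is not the actual Oak code:

{code:java}
// Simplified stand-in for the level/size packing implied by MapRecord.MAX_SIZE.
public class MapHeadOverflowDemo {

    static final int SIZE_BITS = 29;
    static final int MAX_SIZE = (1 << SIZE_BITS) - 1; // 536,870,911

    // Pack level (high bits) and size (low 29 bits) into a single int.
    // Note: no range check on size, so one entry too many spills into the level bits.
    static int pack(int level, int size) {
        return (level << SIZE_BITS) | size;
    }

    static int level(int head) { return head >>> SIZE_BITS; }

    static int size(int head)  { return head & MAX_SIZE; }

    public static void main(String[] args) {
        int head = pack(0, MAX_SIZE + 1);            // one entry over the limit
        System.out.println(level(head));             // 1 -> the root branch now claims level 1
        System.out.println(size(head));              // 0 -> the reported size wrapped around
    }
}
{code}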
> Since this is a hard limit of the segment store and going above this number would mean
rewriting the internals of the HAMT structure currently in use, I propose the following
mitigation (a sketch of these checks follows the references below):
> * add a size check for the branch record so that the limit cannot be exceeded
> * log a warning when the number of entries goes over 400,000,000
> * log an error when the number of entries goes over 500,000,000 and do not allow any
write operations on the node
> * allow further writes only if the {{oak.segmentNodeStore.allowWritesOnHugeMapRecord}} system
property is present
> [0] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L284
> [1] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L291
> [2] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/RecordWriters.java#L231
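> A minimal sketch of the proposed checks, assuming slf4j logging; the guard class, the
method name and the exception type are illustrative, while the thresholds and the system
property name are taken from the list above:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch of the proposed mitigation, not the attached patch:
// MapEntryCountGuard and checkEntryCount are hypothetical names.
final class MapEntryCountGuard {

    private static final Logger LOG = LoggerFactory.getLogger(MapEntryCountGuard.class);

    private static final int WARN_THRESHOLD = 400_000_000;
    private static final int ERROR_THRESHOLD = 500_000_000;

    static void checkEntryCount(int entryCount) {
        boolean allowHugeWrites =
                System.getProperty("oak.segmentNodeStore.allowWritesOnHugeMapRecord") != null;
        if (entryCount > ERROR_THRESHOLD && !allowHugeWrites) {
            // Refuse the write unless explicitly overridden via the system property.
            LOG.error("Map entry count {} went over {}; rejecting further writes",
                    entryCount, ERROR_THRESHOLD);
            throw new UnsupportedOperationException(
                    "Map records with more than " + ERROR_THRESHOLD + " entries are not supported");
        }
        if (entryCount > WARN_THRESHOLD) {
            // Early warning well before the hard limit of 536,870,911 entries.
            LOG.warn("Map entry count {} is approaching MapRecord.MAX_SIZE", entryCount);
        }
    }
}
{code}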

