jackrabbit-oak-issues mailing list archives

From "Andrei Dulceanu (Jira)" <j...@apache.org>
Subject [jira] [Commented] (OAK-9095) MapRecord corruption when adding more than MapRecord.MAX_SIZE entries in branch record
Date Thu, 04 Jun 2020 08:18:00 GMT

    [ https://issues.apache.org/jira/browse/OAK-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17125673#comment-17125673 ]

Andrei Dulceanu commented on OAK-9095:
--------------------------------------

Thanks for reviewing, [~thomasm]!

Fixed in trunk at r1878464.

> MapRecord corruption when adding more than MapRecord.MAX_SIZE entries in branch record
> --------------------------------------------------------------------------------------
>
>                 Key: OAK-9095
>                 URL: https://issues.apache.org/jira/browse/OAK-9095
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: segment-tar
>    Affects Versions: 1.22.3, 1.8.22, 1.30.0
>            Reporter: Andrei Dulceanu
>            Assignee: Andrei Dulceanu
>            Priority: Major
>         Attachments: OAK-9095-02.patch, OAK-9095.patch
>
>
> It is now possible to write a {{MapRecord}} with a huge number of entries, exceeding
> the maximum limit, {{MapRecord.MAX_SIZE}} (i.e. 536,870,911 entries). The issue stems from
> the fact that the number of entries is checked when writing a map leaf record [0], but not
> when writing a map branch record [1]. When more than {{MapRecord.MAX_SIZE}} entries are written
> in a branch record [2], the {{entryCount}} overflows into the first bit of the level field,
> essentially rendering the entire HAMT structure corrupt: the root branch record is now stored
> at level 1 instead of level 0 and reports an incorrect size as well (i.e. actual size - {{MapRecord.MAX_SIZE}}).
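> To illustrate the mechanism, here is a minimal, self-contained sketch (not the actual Oak
> code), assuming the record header packs a 3-bit level above a 29-bit entry count, which
> matches {{MapRecord.MAX_SIZE}} = 2^29 - 1 = 536,870,911:
> {code:java}
> public class MapHeaderOverflow {
>
>     static final int SIZE_BITS = 29;
>     static final int MAX_SIZE = (1 << SIZE_BITS) - 1; // 536,870,911
>
>     // Pack level and entry count into one header word. There is no range
>     // check on entryCount, mirroring the missing validation on the
>     // branch record write path.
>     static int header(int level, int entryCount) {
>         return (level << SIZE_BITS) | entryCount;
>     }
>
>     static int level(int header) {
>         return header >>> SIZE_BITS;
>     }
>
>     static int size(int header) {
>         return header & MAX_SIZE;
>     }
>
>     public static void main(String[] args) {
>         int head = header(0, MAX_SIZE + 1); // root branch at level 0, one entry too many
>         System.out.println(level(head));    // prints 1, not 0
>         System.out.println(size(head));     // prints 0: the count bled into the level bits
>     }
> }
> {code}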
> Since this is a hard limit of the segment store and going above this number would mean
> rewriting the internals of the HAMT structure currently in use, I propose the following
> mitigation (a rough sketch of the checks follows the list):
> * add a size check for the branch record so that the limit cannot be exceeded
> * log a warning when the number of entries goes over 400,000,000
> * log an error when the number of entries goes over 500,000,000 and do not allow any
> further write operations on the node
> * allow further writes only if the {{oak.segmentNodeStore.allowWritesOnHugeMapRecord}} system
> property is present
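> The sketch below shows how these checks could fit together, assuming a guard invoked before
> a branch record is written; the class and method names are hypothetical, and only the
> thresholds, the hard limit and the system property name come from the proposal above:
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class MapSizeGuard {
>
>     private static final Logger LOG = LoggerFactory.getLogger(MapSizeGuard.class);
>
>     private static final int MAX_SIZE = (1 << 29) - 1; // MapRecord.MAX_SIZE
>     private static final int WARN_SIZE = 400_000_000;
>     private static final int ERROR_SIZE = 500_000_000;
>
>     // To be called with the total entry count before a map branch record is written.
>     static void checkEntryCount(int entryCount) {
>         if (entryCount > MAX_SIZE) {
>             // Hard limit of the store: never let the count overflow the header.
>             throw new IllegalArgumentException(
>                     "Map record has more than " + MAX_SIZE + " entries");
>         }
>         if (entryCount > ERROR_SIZE
>                 && System.getProperty("oak.segmentNodeStore.allowWritesOnHugeMapRecord") == null) {
>             LOG.error("Map record has {} entries; refusing further writes on this node", entryCount);
>             throw new UnsupportedOperationException("Map record too large");
>         }
>         if (entryCount > WARN_SIZE) {
>             LOG.warn("Map record has {} entries, approaching MapRecord.MAX_SIZE ({})",
>                     entryCount, MAX_SIZE);
>         }
>     }
> }
> {code}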
> [0] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L284
> [1] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L291
> [2] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/RecordWriters.java#L231



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
