jackrabbit-oak-issues mailing list archives

From "Andrei Dulceanu (Jira)" <j...@apache.org>
Subject [jira] [Commented] (OAK-9095) MapRecord corruption when adding more than MapRecord.MAX_SIZE entries in branch record
Date Wed, 03 Jun 2020 13:27:00 GMT

    [ https://issues.apache.org/jira/browse/OAK-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17124950#comment-17124950 ]

Andrei Dulceanu commented on OAK-9095:
--------------------------------------

Thanks for the feedback, [~thomasm]!
{quote}The main issue is, when the hard limit is reached, new entries are discarded, but writing
is not prevented (no exception is thrown). I would throw an exception instead, so that writes
fail and the transaction is not committed.
{quote}
I changed this accordingly and now an exception is thrown.
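
For reference, here is a minimal sketch of that kind of guard, assuming a check around the
branch record writer; the class, method and exception type below are illustrative assumptions,
not the actual code in the patch:
{code:java}
// Minimal sketch, assuming a guard around the branch record writer; the
// class, method and exception type are illustrative, not the actual patch.
public final class MapSizeGuard {

    // 2^29 - 1, mirroring MapRecord.MAX_SIZE as described in the issue
    static final int MAX_SIZE = (1 << 29) - 1;

    private MapSizeGuard() {
    }

    static void checkEntryCount(int entryCount) {
        if (entryCount > MAX_SIZE) {
            // Fail the write so the transaction is not committed with a
            // corrupt HAMT structure, instead of silently dropping entries.
            throw new IllegalStateException(
                    "Too many map entries: " + entryCount + " (limit is " + MAX_SIZE + ")");
        }
    }
}
{code}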
{quote}There is no unit test. I guess it's hard to write one, but it's fairly important to
ensure this works as expected... So maybe by injecting a very high number (near the different
limits) into the data structure?
{quote}
It wasn't straightforward to come up with the unit tests, since using the real data structure
and inserting millions of entries was painfully slow. That's why I preferred to mock the call
to {{size()}} and then check the behaviour when attempting to insert a single new entry.
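
To illustrate the approach, a sketch of such a test using Mockito; the mocked type, the guard
it calls and the assertions are assumptions, not the committed test from the patch:
{code:java}
import static org.junit.Assert.fail;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class HugeMapRecordTest {

    @Test
    public void insertBeyondLimitIsRejected() {
        // Instead of inserting ~536 million real entries, stub the map's
        // reported size at the limit and verify that one more entry is
        // rejected. MapRecord is mocked here (if it is final, Mockito's
        // inline mock maker would be needed); MapSizeGuard is the
        // hypothetical helper from the sketch above, so the actual wiring
        // in OAK-9095-02.patch may differ.
        MapRecord map = mock(MapRecord.class);
        when(map.size()).thenReturn(MapSizeGuard.MAX_SIZE);

        try {
            MapSizeGuard.checkEntryCount(map.size() + 1);
            fail("Expected the write to fail at MAX_SIZE + 1 entries");
        } catch (IllegalStateException expected) {
            // the write was correctly rejected instead of corrupting the map
        }
    }
}
{code}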

I included the changes above, as well as the others you suggested, in [^OAK-9095-02.patch]. It
would be great to have your thoughts on it!

> MapRecord corruption when adding more than MapRecord.MAX_SIZE entries in branch record
> --------------------------------------------------------------------------------------
>
>                 Key: OAK-9095
>                 URL: https://issues.apache.org/jira/browse/OAK-9095
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: segment-tar
>    Affects Versions: 1.22.3, 1.8.22, 1.30.0
>            Reporter: Andrei Dulceanu
>            Assignee: Andrei Dulceanu
>            Priority: Major
>         Attachments: OAK-9095-02.patch, OAK-9095.patch
>
>
> It is now possible to write a {{MapRecord}} with a huge number of entries, going over the
> maximum limit, {{MapRecord.MAX_SIZE}} (i.e. 536,870,911 entries). This issue stems from the
> fact that the number of entries is checked when writing a map leaf record [0], but not when
> writing a map branch record [1]. When more than {{MapRecord.MAX_SIZE}} entries are written
> in a branch record [2], the {{entryCount}} overflows into the first bit of the level,
> essentially rendering the entire HAMT structure corrupt: the root branch record is now stored
> at level 1 instead of level 0 and also reports an incorrect size (i.e. actual size - {{MapRecord.MAX_SIZE}}).
> Since this is a hard limit of the segment store and going above this number would mean
> rewriting the internals of the HAMT structure currently in use, I propose the following mitigation:
> * add a size check for the branch record to not allow going over the limit
> * log a warning when the number of entries goes over 400,000,000
> * log an error when the number of entries goes over 500,000,000 and do not allow any write
> operations on the node
> * allow further writes only if the {{oak.segmentNodeStore.allowWritesOnHugeMapRecord}} system
> property is present
> [0] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L284
> [1] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/DefaultSegmentWriter.java#L291
> [2] https://github.com/apache/jackrabbit-oak/blob/1.22/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/RecordWriters.java#L231
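
As a rough sketch of how the proposed thresholds and the system-property escape hatch could fit
together: the threshold values and property name come from the proposal above, while the class,
method and logging details are assumptions rather than the actual change in the attached patches.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch only; the threshold values and the system property are
// taken from the proposal above, everything else is assumed.
public final class HugeMapWriteGuard {

    private static final Logger LOG = LoggerFactory.getLogger(HugeMapWriteGuard.class);

    private static final int WARN_THRESHOLD = 400_000_000;
    private static final int ERROR_THRESHOLD = 500_000_000;

    // Escape-hatch system property proposed in the issue description
    private static final String ALLOW_WRITES_PROPERTY =
            "oak.segmentNodeStore.allowWritesOnHugeMapRecord";

    private HugeMapWriteGuard() {
    }

    static void checkBranchEntryCount(int entryCount) {
        if (entryCount > ERROR_THRESHOLD) {
            LOG.error("Map record has {} entries, above the error threshold of {}",
                    entryCount, ERROR_THRESHOLD);
            // Refuse further writes unless the escape-hatch property is present.
            if (System.getProperty(ALLOW_WRITES_PROPERTY) == null) {
                throw new IllegalStateException(
                        "Writes disabled for map record with " + entryCount + " entries");
            }
        } else if (entryCount > WARN_THRESHOLD) {
            LOG.warn("Map record has {} entries, approaching MapRecord.MAX_SIZE", entryCount);
        }
    }
}
{code}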



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
