jackrabbit-oak-issues mailing list archives

From "Stefan Egli (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (OAK-4581) Persistent local journal for more reliable event generation
Date Fri, 02 Sep 2016 09:00:33 GMT

    [ https://issues.apache.org/jira/browse/OAK-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457991#comment-15457991 ]

Stefan Egli commented on OAK-4581:
----------------------------------

[~mduerig], thanks for the comments!
bq. Do we know that we need to go off-heap with that queue?
Agreed, the entries are normally cheap, but generally speaking it's the open map mechanism
that can make them unbounded, and thus larger. And even assuming they are cheap on average,
you can have a traffic burst so high that it overwhelms even highly optimized listener logic.
In that case the queues grow and you'll get an OutOfMemoryError. The benefit of persisting
the queues (when they become big, that is) applies to such rare special cases only: you'd
have to construct a rare case that forces an OutOfMemoryError without this patch but not with it.
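The spill-over idea can be sketched as follows; this is an illustrative stand-in, not the proposed tarMK-backed implementation. The queue stays on-heap while small and diverts entries to persistent storage once a threshold is crossed, so a burst fills disk instead of triggering an OutOfMemoryError:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch only: "disk" is a second in-memory deque standing in for the
// persistence layer; a real implementation would write to durable storage.
public class SpillQueue {
    private final int maxInMemory;
    private final Deque<String> heap = new ArrayDeque<>();
    private final Deque<String> disk = new ArrayDeque<>();

    public SpillQueue(int maxInMemory) { this.maxInMemory = maxInMemory; }

    public void add(String event) {
        // Normal case: stay on-heap. Once we have spilled, keep spilling
        // so FIFO order is preserved.
        if (heap.size() < maxInMemory && disk.isEmpty()) {
            heap.add(event);
        } else {
            disk.add(event);
        }
    }

    public String poll() {
        String e = heap.poll();
        if (e != null && !disk.isEmpty()) {
            heap.add(disk.poll());  // refill the heap from the persisted tail
        }
        return e;
    }
}
```

Under this scheme the on-heap footprint is bounded by `maxInMemory` regardless of how long the burst lasts.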
bq. I expect unbounded queues to have adverse effects on the hotness of the various caches.
Right. There are some ideas that haven't been mentioned much or fleshed out yet. On the one
hand we should do _pre-filtering_ of events, so that only events actually meant for a listener
end up on its queue; the listener shouldn't have to filter afterwards anymore. Currently we
put events on every listener's queue and only filter after the fact. If queues become large,
this very fact becomes an issue, exactly due to cache inefficiencies: a lot of computation is
then spent purely on figuring out whether a listener needs an entry or not (as it can no
longer be found in the cache). With pre-filtering this would no longer be an issue.
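A minimal sketch of the pre-filtering idea, with hypothetical names (not Oak's actual observation API): each listener's filter is evaluated before enqueueing, so a queue only ever holds entries its listener wants:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Illustrative only: a real Oak filter would inspect paths, node types,
// event types etc.; a Predicate over the changed path stands in for that.
public class Prefilter {
    public static class Subscription {
        public final Predicate<String> filter;
        public final List<String> queue = new ArrayList<>();
        public Subscription(Predicate<String> filter) { this.filter = filter; }
    }

    // Filter up front: only matching subscriptions receive the entry,
    // so no computation is wasted later on entries nobody wants.
    public static void dispatch(String changedPath, List<Subscription> subs) {
        for (Subscription s : subs) {
            if (s.filter.test(changedPath)) {
                s.queue.add(changedPath);
            }
        }
    }
}
```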
What would be left, though, is the cache inefficiency for events that listeners actually _want_.
There we might optimize by including a bit more information in what we persist, perhaps the
actual diff if it's not too big.
bq. Any thoughts on how unbounded queues should interact with gc?
One approach we currently target is to checkpoint the oldest entry, so that we prevent gc
from removing it (assuming checkpoints are respected).
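The checkpoint approach can be sketched like this. The method names mirror Oak's NodeStore checkpoint/release API, but the store below is an in-memory stand-in:

```java
import java.util.HashSet;
import java.util.Set;

interface CheckpointStore {
    String checkpoint(long lifetimeMillis); // take a checkpoint, get a handle
    boolean release(String handle);         // drop a checkpoint
}

// In-memory stand-in for illustration; a real NodeStore persists checkpoints.
class InMemoryCheckpointStore implements CheckpointStore {
    final Set<String> live = new HashSet<>();
    private int next = 0;
    public String checkpoint(long lifetimeMillis) {
        String h = "cp-" + next++;
        live.add(h);
        return h;
    }
    public boolean release(String handle) { return live.remove(handle); }
}

// Hold a checkpoint covering the oldest unconsumed queue entry so revision
// gc cannot collect the state it refers to; move the checkpoint forward as
// the queue drains.
public class CheckpointGuard {
    private final CheckpointStore store;
    private String current;

    public CheckpointGuard(CheckpointStore store) { this.store = store; }

    // Take the new checkpoint first, then release the old one, so there is
    // never a window in which no checkpoint is held.
    public void advance() {
        String next = store.checkpoint(Long.MAX_VALUE);
        if (current != null) store.release(current);
        current = next;
    }

    public String current() { return current; }
}
```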
bq. However I dislike having to cope with serialising the open CommitInfo class. At least
we should rely on a general purpose library here.
Open for alternatives for sure! I was assuming that we need to store the CommitInfo object,
as that's what persisting is mostly about. And if something in there is not serializable,
then we're lost and have to skip it (though we can warn loudly). What exactly were you
thinking of as alternatives?
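The skip-and-warn behaviour could look roughly like this, with a plain map standing in for CommitInfo's info map (names are illustrative):

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Sketch: copy only the entries whose values we can serialize and warn
// loudly about the rest, rather than failing the whole commit info.
public class CommitInfoFilter {
    public static Map<String, Object> serializableSubset(Map<String, Object> info) {
        Map<String, Object> out = new HashMap<>();
        for (Map.Entry<String, Object> e : info.entrySet()) {
            if (e.getValue() instanceof Serializable) {
                out.put(e.getKey(), e.getValue());
            } else {
                System.err.println(
                    "WARN: dropping non-serializable commit info entry: " + e.getKey());
            }
        }
        return out;
    }
}
```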
bq. I don't think PersistedBlockingQueue should use a node store as its back-end.
I'm probably not getting the entirety of this point. I guess one argument for reusing the
tarMK is that it's something we have and know we can use; we could certainly use something
else. Regarding GC, the idea was _not_ to rely on GCing that observation tarMK but to use
tarMK generations, similar to how that's done in the persistent cache: we'd throw away a
whole tarMK file set once we've switched to a new one.
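The generation idea can be sketched as follows, with in-memory lists standing in for tar file sets (illustrative only): writes go to the current generation, and once no listener needs an old generation any more, its whole file set is dropped at once instead of being garbage-collected entry by entry:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class GenerationalJournal {
    // Oldest generation first; newest (writable) generation last.
    private final Deque<List<String>> generations = new ArrayDeque<>();

    public GenerationalJournal() { rotate(); }

    // Start a new generation; subsequent writes land there.
    public final void rotate() { generations.addLast(new ArrayList<>()); }

    public void append(String entry) { generations.peekLast().add(entry); }

    // Drop the oldest generation wholesale once every consumer is past it;
    // the current generation is never dropped.
    public void dropOldest() {
        if (generations.size() > 1) generations.pollFirst();
    }

    public int generationCount() { return generations.size(); }
}
```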

> Persistent local journal for more reliable event generation
> -----------------------------------------------------------
>
>                 Key: OAK-4581
>                 URL: https://issues.apache.org/jira/browse/OAK-4581
>             Project: Jackrabbit Oak
>          Issue Type: New Feature
>          Components: core
>            Reporter: Chetan Mehrotra
>            Assignee: Stefan Egli
>              Labels: observation
>             Fix For: 1.6
>
>         Attachments: OAK-4581.v0.patch
>
>
> As discussed in OAK-2683, "hitting the observation queue limit" has multiple drawbacks.
Quite a bit of work has been done to make diff generation faster. However, there is still a
chance of the event queue getting filled up.
> This issue is meant to implement a persistent event journal. Idea here being
> # NodeStore would push the diff into a persistent store via a synchronous observer
> # Observers which are meant to handle such events in an async way (by virtue of being
wrapped in BackgroundObserver) would instead pull the events from this persisted journal
> h3. A - What is persisted
> h4. 1 - Serialized Root States and CommitInfo
> In this approach we just persist the root states in serialized form. 
> * DocumentNodeStore - This means storing the root revision vector
> * SegmentNodeStore - {color:red}Q1 - What does the serialized form of a SegmentNodeStore
root state look like?{color} - Possibly the RecordId of the "root" state
> Note that with OAK-4528 DocumentNodeStore can rely on the persisted remote journal to
determine the affected paths, which reduces the need for persisting the complete diff locally.
> Event generation logic would then "deserialize" the persisted root states and generate the
diff as currently done via NodeState comparison
> h4. 2 - Serialized commit diff and CommitInfo
> In this approach we can save the diff in JSOP form. The diff only contains information
about the affected paths, similar to what is currently stored in the DocumentNodeStore journal
> h4. CommitInfo
> The commit info would also need to be serialized, so it needs to be ensured that whatever
is stored there can be serialized or recalculated
> h3. B - How it is persisted
> h4. 1 - Use a secondary segment NodeStore
> OAK-4180 makes use of SegmentNodeStore as a secondary store for caching. [~mreutegg]
suggested that for the persisted local journal we can also utilize a SegmentNodeStore
instance. Care needs to be taken with compaction, either via a generation approach or by
relying on online compaction
> h4. 2 - Make use of write-ahead log implementations
> [~ianeboston] suggested that we can make use of a write-ahead log implementation like
[1], [2] or [3]
> h3. C - How changes get pulled
> Some points to consider for event generation logic
> # Would need a way to keep pointers to journal entries on a per-listener basis. This would
allow each listener to "pull" content changes and generate diffs at its own speed, keeping
the in-memory overhead low
> # The journal should survive restarts
> [1] http://www.mapdb.org/javadoc/latest/mapdb/org/mapdb/WriteAheadLog.html
> [2] https://github.com/apache/activemq/tree/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/disk/journal
> [3] https://github.com/elastic/elasticsearch/tree/master/core/src/main/java/org/elasticsearch/index/translog
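As a rough sketch of section C above, a single append-only journal with one cursor per listener lets each listener pull at its own pace; the names here are illustrative, not Oak API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: one shared journal, one cursor per listener. The journal is an
// in-memory list standing in for persisted entries; in a real implementation
// both entries and cursors would survive restarts.
public class JournalCursors {
    private final List<String> entries = new ArrayList<>();
    private final Map<String, Integer> cursors = new HashMap<>();

    public void append(String entry) { entries.add(entry); }

    // Pull the next unconsumed entry for a listener, or null if caught up.
    // Each listener advances independently of the others.
    public String poll(String listenerId) {
        int pos = cursors.getOrDefault(listenerId, 0);
        if (pos >= entries.size()) return null;
        cursors.put(listenerId, pos + 1);
        return entries.get(pos);
    }

    // The slowest listener's position bounds what cleanup must retain.
    public int oldestRetainedIndex() {
        return cursors.values().stream().mapToInt(Integer::intValue).min().orElse(0);
    }
}
```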



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
