lucene-dev mailing list archives

From "Ben Manes (JIRA)" <>
Subject [jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption
Date Sat, 18 Feb 2017 04:42:44 GMT


Ben Manes commented on SOLR-10141:

Thanks!!! I think I found the bug. It now passes your test case.

The problem was due to put() stampeding over the value during the eviction. The eviction
performed the following steps:
# Read the key, value, etc
# Conditionally removed in a computeIfPresent() block
   - resurrected if a race occurred (e.g. was thought expired, but newly accessed)
# Mark the entry as "dead" (using a synchronized (entry) block)
# Notify the listener
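A minimal Java sketch of that eviction ordering (the `Node` class, the `alive` flag, and the listener wiring here are illustrative stand-ins, not Caffeine's actual internals):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

class EvictionSketch {
    static final class Node {
        volatile Object value;
        volatile boolean alive = true;
        Node(Object value) { this.value = value; }
    }

    final ConcurrentHashMap<String, Node> data = new ConcurrentHashMap<>();
    final BiConsumer<String, Object> listener;

    EvictionSketch(BiConsumer<String, Object> listener) { this.listener = listener; }

    void put(String key, Object value) { data.put(key, new Node(value)); }

    // Mirrors the four eviction steps above. The window between the
    // computeIfPresent removal (step 2) and the mark-dead block (step 3)
    // is where a concurrent update can still mutate the node's value.
    void evict(String key) {
        Node node = data.get(key);
        if (node == null) return;
        Object value = node.value;                   // 1. read the key, value, etc.
        data.computeIfPresent(key, (k, n) -> null);  // 2. conditional removal
        synchronized (node) {                        // 3. mark the entry as "dead"
            node.alive = false;
        }
        listener.accept(key, value);                 // 4. notify the listener
    }
}
```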

This failed because putFast can perform its update outside of a hash table lock (e.g. during
a computation). It synchronizes on the entry to update it, checking first whether it is still
alive. This resulted in a race where the entry was removed from the hash table, the value was
updated, and the entry was marked as dead. When the listener was notified, it received the
wrong value.
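A hedged sketch of such a putFast-style update, reusing the same assumed entry shape (again, illustrative names rather than the real Caffeine code):

```java
class PutFastSketch {
    static final class Node {
        volatile Object value;
        volatile boolean alive = true;
        Node(Object value) { this.value = value; }
    }

    // Updates the node in place, synchronizing on the entry rather than on
    // the hash table. If this runs between the eviction's removal (step 2)
    // and its mark-dead block (step 3), the alive check still passes, so it
    // overwrites the value of an entry that is already gone from the table.
    static boolean putFast(Node node, Object newValue) {
        synchronized (node) {
            if (!node.alive) {
                return false; // entry already dead; caller must fall back to the table
            }
            node.value = newValue;
            return true;
        }
    }
}
```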

The solution I have now is to expand the synchronized block on eviction. This passes your
test and should be cheap. I'd like to review it a little more and incorporate your test into
my suite.
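Conceptually, the fix widens the eviction's synchronized block so that the removal and the mark-dead step are atomic with respect to a putFast-style update. A sketch under the same assumed names (not the actual patch):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

class FixedEvictionSketch {
    static final class Node {
        volatile Object value;
        volatile boolean alive = true;
        Node(Object value) { this.value = value; }
    }

    final ConcurrentHashMap<String, Node> data = new ConcurrentHashMap<>();
    final BiConsumer<String, Object> listener;

    FixedEvictionSketch(BiConsumer<String, Object> listener) { this.listener = listener; }

    void put(String key, Object value) { data.put(key, new Node(value)); }

    // Read, removal, and mark-dead now happen under one entry lock, so a
    // concurrent putFast either completes before the eviction (and its value
    // is the one reported) or observes alive == false and backs off.
    void evict(String key) {
        Node node = data.get(key);
        if (node == null) return;
        Object value;
        synchronized (node) {
            value = node.value;
            data.computeIfPresent(key, (k, n) -> null);
            node.alive = false;
        }
        listener.accept(key, value);
    }
}
```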

This is an excellent find. I've stared at the code many times and the race seems obvious in
hindsight.

> Caffeine cache causes BlockCache corruption 
> --------------------------------------------
>                 Key: SOLR-10141
>                 URL:
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Yonik Seeley
>         Attachments: SOLR-10141.patch,
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the concurrency
> test passes with the previous implementation using ConcurrentLinkedHashMap and fails with
> Caffeine.

This message was sent by Atlassian JIRA
