lucene-dev mailing list archives

From "Yonik Seeley (JIRA)" <>
Subject [jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption
Date Sat, 18 Feb 2017 23:02:44 GMT


Yonik Seeley commented on SOLR-10141:

The size issue affects only the BlockCache specifically, not any of the other Solr caches.
Actually, the way the BlockCache is written, we are guaranteed to never have more than maxEntries: writers have to wait for an open slot (which opens up once the removal listener is called). The writer spins a bit trying to find an open slot and fails if it can't. Doing extra work via cache.cleanUp() when we don't see an empty slot is definitely better than failing to cache the entry.
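The write path described above can be sketched roughly as follows. This is a simplified, hypothetical model, not Solr's actual BlockCache code: the slot table is reduced to an array of atomic flags, the class and method names (SlotCache, findSlot, store) are illustrative, and cleanUp() here merely simulates eviction work that would free a slot.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a writer spins looking for a free slot,
// triggers cache maintenance once if none is found, and only then
// gives up (a "store failure").
public class SlotCache {
    private final AtomicBoolean[] slots;

    public SlotCache(int maxEntries) {
        slots = new AtomicBoolean[maxEntries];
        for (int i = 0; i < maxEntries; i++) {
            slots[i] = new AtomicBoolean(false);
        }
    }

    // Spin over the slot table a few times looking for a free slot.
    private int findSlot(int attempts) {
        for (int a = 0; a < attempts; a++) {
            for (int i = 0; i < slots.length; i++) {
                if (slots[i].compareAndSet(false, true)) return i;
            }
        }
        return -1;
    }

    // Stand-in for cache.cleanUp(): in the real cache this drains
    // pending evictions so removal listeners fire and release slots;
    // here we model it as freeing one slot.
    private void cleanUp() {
        slots[0].set(false);
    }

    /** Returns the claimed slot index, or -1 if the store failed. */
    public int store() {
        int slot = findSlot(2);
        if (slot < 0) {
            cleanUp();            // do extra work rather than fail
            slot = findSlot(1);   // one more pass after maintenance
        }
        return slot;              // -1 => entry is not cached
    }
}
```

The point of the extra cleanUp() pass is that an eviction may already be pending; draining it immediately turns what would have been a store failure into a successful cache insert.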

I imagine the issue existed when CLHM was used as well. Store failures aren't currently tracked as a metric, and a failure only leads to a lower cache hit rate. I plan to start tracking it, and then to see how often it happens when we're actually caching real HDFS blocks. That's a separate issue though.
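Tracking store failures could look something like the sketch below. This is an assumption about how such a metric might be wired up, not the eventual Solr implementation; the names (StoreMetrics, recordStore, failureRate) are illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical store-failure metric: count failed stores alongside
// total store attempts so a failure rate can be reported.
public class StoreMetrics {
    private final AtomicLong storeAttempts = new AtomicLong();
    private final AtomicLong storeFailures = new AtomicLong();

    public void recordStore(boolean succeeded) {
        storeAttempts.incrementAndGet();
        if (!succeeded) storeFailures.incrementAndGet();
    }

    // Fraction of store attempts that failed to claim a slot.
    public double failureRate() {
        long attempts = storeAttempts.get();
        return attempts == 0 ? 0.0 : (double) storeFailures.get() / attempts;
    }
}
```

A counter like this would make it cheap to see how often the spin-for-a-slot path actually fails under a real HDFS workload.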

> Caffeine cache causes BlockCache corruption 
> --------------------------------------------
>                 Key: SOLR-10141
>                 URL:
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Yonik Seeley
>         Attachments: SOLR-10141.patch,
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the concurrency
> test passes with the previous implementation using ConcurrentLinkedHashMap and fails with Caffeine.

This message was sent by Atlassian JIRA
