ignite-dev mailing list archives

From Stanislav Lukyanov <stanlukya...@gmail.com>
Subject RE: ConcurrentLinkedHashMap works incorrectly after clear()
Date Tue, 24 Jul 2018 13:45:15 GMT
It seems that we currently use the CLHM primarily as a FIFO cache.
I see two ways around that.

First, we could implement such an LRU/FIFO cache on top of another, properly supported data structure
from j.u.c.
ConcurrentSkipListMap seems OK. I have a draft implementation of LruEvictionPolicy based on
it that passes functional tests,
but I haven't benchmarked it yet.
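To make the idea concrete, here is a minimal sketch of a FIFO-bounded cache built on ConcurrentSkipListMap. This is not Stan's draft LruEvictionPolicy; the class and method names are hypothetical, and a real LRU variant would also need to re-key entries on access. Note that ConcurrentSkipListMap.size() is O(n) and only weakly consistent under concurrency.

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: FIFO-bounded cache on ConcurrentSkipListMap. */
class FifoSkipListCache<V> {
    private final ConcurrentSkipListMap<Long, V> map = new ConcurrentSkipListMap<>();
    private final AtomicLong seq = new AtomicLong(); // monotonically increasing insertion key
    private final int maxSize;

    FifoSkipListCache(int maxSize) { this.maxSize = maxSize; }

    void add(V val) {
        map.put(seq.incrementAndGet(), val);
        // Evict oldest entries while over capacity; pollFirstEntry() is atomic.
        while (map.size() > maxSize)
            map.pollFirstEntry();
    }

    int size() { return map.size(); }

    // clear() stays consistent here: size() is computed by traversal,
    // not tracked in a separate counter that can go stale.
    void clear() { map.clear(); }
}
```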

Second, Guava has a Cache API with a lot of configuration options that we could use (license
is Apache, should be OK). 
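For reference, Guava's Cache API gives a bounded, thread-safe cache out of the box; a sketch of how it might be used (the sizes and key/value types here are illustrative, not a proposal for specific Ignite settings):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

Cache<String, byte[]> cache = CacheBuilder.newBuilder()
    .maximumSize(1000)        // bounded; evicts in roughly LRU order
    .concurrencyLevel(16)     // segment count hint, like CLHM's segments
    .build();

cache.put("key", new byte[] {1, 2, 3});
cache.invalidateAll();        // Guava's "clear"; size bookkeeping stays consistent
long n = cache.size();        // 0 after invalidateAll()
```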

Stan

From: Nikolay Izhikov
Sent: 24 July 2018 16:27
To: dev@ignite.apache.org
Subject: Re: ConcurrentLinkedHashMap works incorrectly after clear()

Hello, Ilya.

Maybe we need to proceed with ticket [1], "Get rid of org.jsr166.ConcurrentLinkedHashMap"?

Especially if this class is broken and buggy.

[1] https://issues.apache.org/jira/browse/IGNITE-7516

On Tue, 24/07/2018 at 16:20 +0300, Ilya Lantukh wrote:
> Thanks for revealing this issue!
> 
> I don't understand why we should disallow calling clear().
> 
> One way it can be re-implemented is:
> 1. acquire write locks on all segments;
> 2. clear them;
> 3. reset size to 0;
> 4. release locks.
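The four steps above could be sketched on a toy segmented map like this. The Segment layout, field names, and hashing are hypothetical stand-ins, not the real org.jsr166 code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

/** Toy segmented map illustrating the proposed clear(); layout is hypothetical. */
class SegmentedMap<K, V> {
    static class Segment<K, V> extends ReentrantLock {
        final Map<K, V> entries = new HashMap<>();
    }

    final Segment<K, V>[] segments;
    final AtomicLong size = new AtomicLong(); // AtomicLong, per the suggestion below

    @SuppressWarnings("unchecked")
    SegmentedMap(int nSegments) {
        segments = new Segment[nSegments];
        for (int i = 0; i < nSegments; i++) segments[i] = new Segment<>();
    }

    void put(K k, V v) {
        Segment<K, V> s = segments[Math.abs(k.hashCode() % segments.length)];
        s.lock();
        try {
            if (s.entries.put(k, v) == null) size.incrementAndGet();
        } finally { s.unlock(); }
    }

    void clear() {
        for (Segment<K, V> s : segments) s.lock();              // 1. acquire all write locks
        try {
            for (Segment<K, V> s : segments) s.entries.clear(); // 2. clear segments
            size.set(0);                                        // 3. reset size to 0
        } finally {
            for (Segment<K, V> s : segments) s.unlock();        // 4. release locks
        }
    }

    long sizex() { return size.get(); }
}
```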
> 
> Another approach is to calculate inside
> ConcurrentLinkedHashMap.Segment.clear() how many entries you actually
> deleted and then call size.addAndGet(...).
> 
> In both cases you'll have to replace LongAdder with AtomicLong.
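The second approach might look like this inside a hypothetical Segment (the names and structure are assumed for illustration, not taken from the jsr166 source):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

/** Hypothetical segment: clear() reports how many entries it removed. */
class Segment<K, V> extends ReentrantLock {
    final Map<K, V> entries = new HashMap<>();

    int clearAndCount() {
        lock();
        try {
            int removed = entries.size();
            entries.clear();
            return removed;
        } finally { unlock(); }
    }
}

// In the map's clear(), subtract exactly what each segment dropped:
//
//     for (Segment<K, V> s : segments)
//         size.addAndGet(-s.clearAndCount());   // size is an AtomicLong
```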
> 
> On Tue, Jul 24, 2018 at 4:03 PM, Ilya Kasnacheev <ilya.kasnacheev@gmail.com>
> wrote:
> 
> > Hello igniters!
> > 
> > So I was working on a fix for
> > https://issues.apache.org/jira/browse/IGNITE-9056
> > The reason for the test flakiness turned out to be that our ConcurrentLinkedHashMap (and
> > its tautological cousin GridBoundedConcurrentLinkedHashMap) is broken :(
> > 
> > When you do clear(), its size counter is not updated. So sizex() will
> > return the old size after clear, and if there's maxCnt set, after several
> > clear()s it will immediately evict entries after they are inserted,
> > maintaining map size at 0.
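The failure mode can be reproduced with a toy model (this is not the real jsr166 code; the class is a deliberately simplified, single-threaded stand-in whose clear() forgets the counter):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

/** Toy bounded map reproducing the bug: clear() never resets the size counter. */
class BuggyBoundedMap<K, V> {
    private final Map<K, V> map = new LinkedHashMap<>();
    private final LongAdder size = new LongAdder();
    private final long maxCnt;

    BuggyBoundedMap(long maxCnt) { this.maxCnt = maxCnt; }

    void put(K k, V v) {
        if (map.put(k, v) == null) size.increment();
        // Eviction is driven by the counter, not the actual map size.
        while (sizex() > maxCnt && !map.isEmpty()) {
            K eldest = map.keySet().iterator().next();
            map.remove(eldest);
            size.decrement();
        }
    }

    V get(K k) { return map.get(k); }

    long sizex() { return size.sum(); }

    void clear() { map.clear(); /* bug: size is never reset */ }
}
```

After one clear() at capacity, the stale counter makes every subsequent insertion look like an overflow, so new entries are evicted immediately and the real map size stays at 0.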
> > 
> > This is scary since indexing internals make intense use of
> > ConcurrentLinkedHashMaps.
> > 
> > My suggestion for this fix is to avoid ever calling clear(): make it
> > throw UnsupportedOperationException and recreate/replace the map instead of
> > clear()ing it. Unless somebody is going to stand up and fix
> > ConcurrentLinkedHashMap.clear() properly. Frankly speaking, I'm afraid of
> > touching this code in any non-trivial way.
> > 
> > --
> > Ilya Kasnacheev
> > 
> 
> 
> 

