johnzon-dev mailing list archives

From Hendrik Dev <hendrikde...@gmail.com>
Subject Re: Main purpose of BufferCache<T>
Date Sat, 05 Jul 2014 19:58:27 GMT
You're right about lock scaling, but maybe we can minimize locking by
using a (nearly lock-free) queue or something like that. The current
implementation was just a first draft to have an alternative. Regarding
ThreadLocals, I don't know if they can cause problems in an application
server environment with heavy thread pooling and different
classloaders. There is also a risk of a memory leak when threads are
reused and live "forever".
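
A minimal sketch of what I mean (names are made up, just to illustrate
the idea, not code from any branch): a ConcurrentLinkedQueue keeps the
recycling nearly lock-free and ties nothing to the thread itself, so
pooled threads and different classloaders are not an issue:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical queue-backed recycle cache: buffers are parked in a
    // nearly lock-free queue instead of a ThreadLocal.
    public class QueueBufferRecycleCache {

        private final int bufferSize;
        private final Queue<char[]> pool = new ConcurrentLinkedQueue<char[]>();

        public QueueBufferRecycleCache(final int bufferSize) {
            this.bufferSize = bufferSize;
        }

        public char[] getBuffer() {
            final char[] buffer = pool.poll();    // non-blocking (CAS based)
            return buffer != null ? buffer : new char[bufferSize];
        }

        public void release(final char[] buffer) {
            if (buffer.length == bufferSize) {    // only recycle matching sizes
                pool.offer(buffer);
            }
        }
    }

One drawback: the queue is unbounded, so it would probably need a size
cap so it cannot grow without limit.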

My proposal would be to rethink whether it's really necessary to cache
allocations. If not, it's easy; if it still is, then let's discuss the
implementation.

I will do some benchmarks as you proposed.
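
For the benchmark itself I was thinking of something in this direction
(a rough JMH sketch with arbitrary names and sizes, not the benchmarks
linked in the quoted mail below):

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;

    // Rough comparison of fresh allocation vs. reusing a thread-local
    // buffer; SIZE is an arbitrary placeholder.
    @State(Scope.Thread)
    public class AllocationVsRecycleBenchmark {

        private static final int SIZE = 8 * 1024;

        private final ThreadLocal<char[]> cached = new ThreadLocal<char[]>() {
            @Override
            protected char[] initialValue() {
                return new char[SIZE];
            }
        };

        @Benchmark
        public char[] allocateNew() {
            return new char[SIZE];
        }

        @Benchmark
        public char[] reuseCached() {
            return cached.get();
        }
    }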

Thanks
Hendrik

On Sat, Jul 5, 2014 at 9:09 PM, Romain Manni-Bucau
<rmannibucau@gmail.com> wrote:
> Interesting.
>
> The non-thread-safe solution is just not possible, for obvious reasons
> I guess. The synchronized one doesn't scale (which is why I used a
> ThreadLocal and not a ReentrantLock).
>
> When benchmarking I saw that we need to cache buffers, but that was
> before a few other refactorings, so maybe it's time to reevaluate it.
> Did you bench the current code with a BufferCache impl which just
> returns a new array instance against the current BufferCache (backed
> by a ThreadLocal)?
>
>
> Do you propose something else? Not sure I fully got you, to be honest :s
>
>
>
> Romain Manni-Bucau
> Twitter: @rmannibucau
> Blog: http://rmannibucau.wordpress.com/
> LinkedIn: http://fr.linkedin.com/in/rmannibucau
> Github: https://github.com/rmannibucau
>
>
> 2014-07-05 20:55 GMT+02:00 Hendrik Dev <hendrikdev22@gmail.com>:
>> Hello,
>>
>> did some benchmarks here [1] too for caching array allocations. It's
>> indeed faster to cache and recycle them (in relative terms), but in
>> absolute figures allocating a 512k byte array on my machine takes
>> about 100 microseconds (= 0.0001 sec) and a typical 8k byte buffer is
>> created within 2 microseconds, which is IMHO not really a bottleneck.
>> Despite that I created an alternative to BufferCache which doesn't
>> use ThreadLocals but does handle different buffer sizes. Please see
>> BufferRecycleCache.java [2] for details. The implementation is
>> unsynchronized by default but can be made synchronized via
>> BufferRecycleCache.synchronizedBufferRecycleCache().
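>>
>> Only to sketch the pattern (the real code is in [2]; the interface
>> shape and method names here are guesses): the synchronized variant
>> can be a simple decorator over the unsynchronized one, in the style
>> of Collections.synchronizedList():
>>
>>     // Hypothetical wrapper: every call to the unsynchronized
>>     // delegate is guarded by the wrapper instance's monitor.
>>     public static BufferRecycleCache synchronizedBufferRecycleCache(
>>             final BufferRecycleCache delegate) {
>>         return new BufferRecycleCache() {
>>             @Override
>>             public synchronized byte[] getBuffer(final int minSize) {
>>                 return delegate.getBuffer(minSize);
>>             }
>>
>>             @Override
>>             public synchronized void releaseBuffer(final byte[] buffer) {
>>                 delegate.releaseBuffer(buffer);
>>             }
>>         };
>>     }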
>>
>> To be honest, I don't know what the best way to go is. For the sake
>> of simplicity I would prefer no allocation cache at all.
>>
>> What do you think?
>>
>> Thanks
>> Hendrik
>>
>>
>> [1]
>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/avg_benchmark_jmh_result_f1_t4_w3_i4.txt
>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/test/java/org/apache/fleece/core/jmh/benchmark/BenchmarkBufferRecycleCache.java
>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/test/java/org/apache/fleece/core/jmh/benchmark/BenchmarkBufferRecycleCacheStateSynced.java
>>
>> [2]
>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/main/java/org/apache/fleece/core/BufferRecycleCache.java
>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/main/java/org/apache/fleece/core/ByteBufferRecycleCache.java
>> (Implementations for char[] and StringBuilder are analogous)
>>
>>
>>
>> On Fri, Jun 27, 2014 at 5:09 PM, Romain Manni-Bucau
>> <rmannibucau@gmail.com> wrote:
>>> Hi
>>>
>>> there are 2 BufferCache usages. Basically we cache *allocations* (not
>>> values) of StringBuilder and char[] (the parser buffer). The name is
>>> maybe not the best one; BufferRecycle could be better. Fleece factories
>>> are thread safe (the parser, generator etc. are not).
>>>
>>> It really changes performance, but a better implementation (with no
>>> dependency) would work.
>>>
>>> Any proposal?
>>>
>>>
>>>
>>>
>>> Romain Manni-Bucau
>>> Twitter: @rmannibucau
>>> Blog: http://rmannibucau.wordpress.com/
>>> LinkedIn: http://fr.linkedin.com/in/rmannibucau
>>> Github: https://github.com/rmannibucau
>>>
>>>
>>> 2014-06-27 16:22 GMT+02:00 Hendrik Dev <hendrikdev22@gmail.com>:
>>>
>>>> Hello,
>>>>
>>>> what is the main purpose of org.apache.fleece.core.BufferCache<T>?
>>>>
>>>> Is it meant to be a real thread-local cache (because we expect many
>>>> parallel threads running) or just a cache which uses ThreadLocals as a
>>>> kind of internal implementation detail (not really expecting parallel
>>>> threads)?
>>>>
>>>> The comment within this class is
>>>> "// alloc are expensive so using a very trivial buffer, we remove from
>>>> TL to say we use it and reset it when we are done (close)"
>>>> Which allocations are meant?  StringBuilder, char[], <T>, ...
>>>>
>>>> Is Fleece supposed to be a thread-safe library? (Beyond what's marked
>>>> as thread-safe in the API specs)
>>>>
>>>> Thanks
>>>> Hendrik (salyh)
>>>>
>>
>>
>>
>> --
>> Hendrik Saly (salyh, hendrikdev22)
>> @hendrikdev22
>> PGP: 0x22D7F6EC



-- 
Hendrik Saly (salyh, hendrikdev22)
@hendrikdev22
PGP: 0x22D7F6EC
