johnzon-dev mailing list archives

From Romain Manni-Bucau <rmannibu...@gmail.com>
Subject Re: Main purpose of BufferCache<T>
Date Fri, 11 Jul 2014 12:03:13 GMT
Hi

Not sure we still use byte[] now, so maybe it's not needed yet (needs to be checked).

StringBuilder instances can be cached too, but they shouldn't be used that
much now, so maybe even the buffer cache for StringBuilder can be removed.

Not sure I fully get the idea of caching stack variables. I'm tempted
to say that if we need to cache a stack variable it means we can cache the
method's returned value. And here it is acceptable (bench to validate
it) to cache keys, which shouldn't change that much, but not values,
which can just make memory explode. If we want to cache values we
need to use a sized map and prove it enhances performance in real
life (and not in a JSON benchmark case, which is really biased regarding
this).
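The sized key cache mentioned above could look roughly like this (a minimal sketch, not the Fleece/Johnzon code; class and method names are invented for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded cache for JSON keys: keeps the most recently used
// entries and evicts the eldest once maxSize is exceeded, so memory cannot
// grow without bound the way an unbounded value cache would.
public class SizedKeyCache {
    private final int maxSize;
    private final Map<String, String> cache;

    public SizedKeyCache(final int maxSize) {
        this.maxSize = maxSize;
        // accessOrder = true turns the LinkedHashMap into an LRU structure
        this.cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(final Map.Entry<String, String> eldest) {
                return size() > SizedKeyCache.this.maxSize;
            }
        };
    }

    // returns a canonical instance of the key, caching it if absent
    public synchronized String intern(final String key) {
        final String cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        cache.put(key, key);
        return key;
    }
}
```

Using the map in access order makes `removeEldestEntry` evict the least recently used key, so the cache stays bounded no matter how many distinct keys the parser sees.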



Romain Manni-Bucau
Twitter: @rmannibucau
Blog: http://rmannibucau.wordpress.com/
LinkedIn: http://fr.linkedin.com/in/rmannibucau
Github: https://github.com/rmannibucau


2014-07-11 13:43 GMT+02:00 Hendrik Dev <hendrikdev22@gmail.com>:
> Have seen your new implementation (BufferStrategy) for char[].
> Do we want to also support StringBuilder and possibly byte[] for
> allocation caching?
>
> Do we want to cache only instance-level variables (heap) or also
> arrays/StringBuilders which have a local scope (stack)?
>
> Thanks
> Hendrik
>
> On Sun, Jul 6, 2014 at 10:08 AM, Romain Manni-Bucau
> <rmannibucau@gmail.com> wrote:
>> Ok
>>
>> I'm pretty sure we need it, but we can allow passing properties to the
>> parser factory to change the impl. In TomEE, thread locals are fine since
>> Fleece will be in the container; in Tomcat it will be in a webapp and
>> create leaks. The default can be no cache, and in TomEE we'll switch it to
>> ThreadLocal by config.
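The property-driven strategy switch described here could be sketched like this (illustrative only; the property name and enum are assumptions, not the real Fleece/Johnzon configuration):

```java
import java.util.Locale;
import java.util.Map;

// Hypothetical factory helper: pick the buffer caching strategy from a
// property, defaulting to "no cache" (safe for webapps, no leak risk) and
// letting a container like TomEE switch to ThreadLocal by configuration.
public final class BufferStrategyConfig {
    public enum Strategy { NO_CACHE, THREAD_LOCAL }

    // assumed property name, for illustration only
    public static final String PROPERTY = "org.apache.fleece.buffer-strategy";

    private BufferStrategyConfig() {
        // utility class
    }

    public static Strategy fromProperties(final Map<String, ?> props) {
        final Object value = props == null ? null : props.get(PROPERTY);
        if (value == null) {
            return Strategy.NO_CACHE; // default: no caching, no leak risk
        }
        return Strategy.valueOf(value.toString().trim().toUpperCase(Locale.ROOT));
    }
}
```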
>>
>> Waiting for your results to start solving it ;)
>> On 5 Jul 2014 20:58, "Hendrik Dev" <hendrikdev22@gmail.com> wrote:
>>
>>> You're right about lock scaling, but maybe we can minimize locking using
>>> a (nearly lock-free) queue or something like that. The current
>>> implementation was just a first draft to have an alternative. Regarding
>>> ThreadLocals, I don't know if they can cause problems in an application
>>> server environment with heavy thread pooling and different classloaders.
>>> There can also be a risk of a memory leak when threads are reused and
>>> live "forever".
>>>
>>> My proposal would be to rethink whether it's really necessary to cache
>>> allocations. If no, then it's easy; if still yes, then let's discuss the
>>> implementation.
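The "nearly lock free" queue idea mentioned here could be sketched as follows (an illustration under assumptions, not the draft implementation under discussion): buffers are recycled through a `ConcurrentLinkedQueue` instead of a ThreadLocal, so nothing is pinned to pooled threads and no classloader leak can occur in a container.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a nearly lock-free buffer recycler (names are invented).
public class QueuedBufferRecycler {
    private final ConcurrentLinkedQueue<char[]> buffers = new ConcurrentLinkedQueue<char[]>();
    private final int bufferSize;
    private final int maxPooled; // cap to avoid hoarding memory under load spikes
    private final AtomicInteger pooled = new AtomicInteger();

    public QueuedBufferRecycler(final int bufferSize, final int maxPooled) {
        this.bufferSize = bufferSize;
        this.maxPooled = maxPooled;
    }

    public char[] take() {
        final char[] buffer = buffers.poll();
        if (buffer != null) {
            pooled.decrementAndGet();
            return buffer;
        }
        return new char[bufferSize]; // pool empty: allocate fresh
    }

    public void release(final char[] buffer) {
        if (buffer.length != bufferSize) {
            return; // wrong size: let the GC reclaim it
        }
        if (pooled.incrementAndGet() <= maxPooled) {
            buffers.offer(buffer);
        } else {
            pooled.decrementAndGet(); // over cap: drop the buffer
        }
    }
}
```

`ConcurrentLinkedQueue` is lock-free (CAS-based), so threads never block on each other, and because buffers live in a shared queue rather than per-thread storage, reused pooled threads cannot keep stale references alive across webapp redeployments.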
>>>
>>> I will do some benchmarks as you proposed.
>>>
>>> Thanks
>>> Hendrik
>>>
>>> On Sat, Jul 5, 2014 at 9:09 PM, Romain Manni-Bucau
>>> <rmannibucau@gmail.com> wrote:
>>> > Interesting.
>>> >
>>> > The non-thread-safe solution is just not possible for obvious reasons,
>>> > I guess. The synchronized one doesn't scale (which is why I used
>>> > ThreadLocal and not a ReentrantLock).
>>> >
>>> > Benching, I saw we needed to cache buffers, but that was before a few
>>> > other refactorings, so maybe it's time to reevaluate it. Did you bench
>>> > the current code with a BufferCache impl which just returns a new array
>>> > instance versus the current BufferCache (backed by a ThreadLocal)?
>>> >
>>> >
>>> > Do you propose something else? Not sure I fully got you, to be honest :s
>>> >
>>> >
>>> >
>>> > Romain Manni-Bucau
>>> > Twitter: @rmannibucau
>>> > Blog: http://rmannibucau.wordpress.com/
>>> > LinkedIn: http://fr.linkedin.com/in/rmannibucau
>>> > Github: https://github.com/rmannibucau
>>> >
>>> >
>>> > 2014-07-05 20:55 GMT+02:00 Hendrik Dev <hendrikdev22@gmail.com>:
>>> >> Hello,
>>> >>
>>> >> Did some benchmarks here [1] too for caching array allocations. It's
>>> >> indeed faster to cache and recycle them (from a relative point of
>>> >> view), but in absolute figures allocating a 512k byte array on my
>>> >> machine takes about 100 microseconds (= 0.0001 sec), and a typical 8k
>>> >> byte buffer is created within 2 microseconds, which is IMHO not really
>>> >> a bottleneck. In spite of that I created an alternative to BufferCache
>>> >> which doesn't deal with ThreadLocals but handles different buffer
>>> >> sizes. Please see BufferRecycleCache.java [2] for details. The
>>> >> implementation is unsynchronized by default but can be used
>>> >> synchronized via BufferRecycleCache.synchronizedBufferRecycleCache().
>>> >>
>>> >> To be honest, I don't know what the best way to go is. For the sake of
>>> >> simplicity I would prefer no allocation cache at all.
>>> >>
>>> >> What do you think?
>>> >>
>>> >> Thanks
>>> >> Hendrik
>>> >>
>>> >>
>>> >> [1]
>>> >>
>>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/avg_benchmark_jmh_result_f1_t4_w3_i4.txt
>>> >>
>>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/test/java/org/apache/fleece/core/jmh/benchmark/BenchmarkBufferRecycleCache.java
>>> >>
>>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/test/java/org/apache/fleece/core/jmh/benchmark/BenchmarkBufferRecycleCacheStateSynced.java
>>> >>
>>> >> [2]
>>> >>
>>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/main/java/org/apache/fleece/core/BufferRecycleCache.java
>>> >>
>>> https://github.com/salyh/fleece_tmp/blob/benchmark-bufferrecyclecache2/fleece-core/src/main/java/org/apache/fleece/core/ByteBufferRecycleCache.java
>>> >> (Implementations for char[] and StringBuilder are analogous)
>>> >>
>>> >>
>>> >>
>>> >> On Fri, Jun 27, 2014 at 5:09 PM, Romain Manni-Bucau
>>> >> <rmannibucau@gmail.com> wrote:
>>> >>> Hi
>>> >>>
>>> >>> there are 2 BufferCache usages. Basically we cache *allocations* (not
>>> >>> values) of StringBuilder and char[] (the parser buffer). The name is
>>> >>> maybe not the best one; BufferRecycle could be better. Fleece
>>> >>> factories are thread safe (the parser, generator etc. are not).
>>> >>>
>>> >>> It really changes the performances, but a better implementation (with
>>> >>> no dependency) would work.
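The recycling scheme described in the class comment quoted further down ("remove from TL to say we use it, reset it when done") can be sketched like this (illustrative names, not the actual BufferCache code):

```java
// Minimal sketch of a ThreadLocal-backed allocation recycler: the buffer is
// removed from the ThreadLocal while in use and handed back on close, so the
// same allocation is reused by the next parser on the same thread.
public class ThreadLocalBufferCache {
    private final int size;
    private final ThreadLocal<char[]> cache = new ThreadLocal<char[]>();

    public ThreadLocalBufferCache(final int size) {
        this.size = size;
    }

    // take the cached buffer out of the ThreadLocal (marking it "in use"),
    // allocating a new one only when none is available for this thread
    public char[] take() {
        final char[] buffer = cache.get();
        if (buffer == null) {
            return new char[size];
        }
        cache.set(null); // removed from the TL to say we use it
        return buffer;
    }

    // called from close(): hand the allocation back for the next use
    public void release(final char[] buffer) {
        cache.set(buffer);
    }
}
```

Note that this is exactly the pattern that worries Hendrik below: on a container with pooled, long-lived threads, the ThreadLocal keeps one buffer per thread alive indefinitely unless the cache is cleaned up on undeploy.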
>>> >>>
>>> >>> Any proposal?
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> Romain Manni-Bucau
>>> >>> Twitter: @rmannibucau
>>> >>> Blog: http://rmannibucau.wordpress.com/
>>> >>> LinkedIn: http://fr.linkedin.com/in/rmannibucau
>>> >>> Github: https://github.com/rmannibucau
>>> >>>
>>> >>>
>>> >>> 2014-06-27 16:22 GMT+02:00 Hendrik Dev <hendrikdev22@gmail.com>:
>>> >>>
>>> >>>> Hello,
>>> >>>>
>>> >>>> what is the main purpose of org.apache.fleece.core.BufferCache<T>?
>>> >>>>
>>> >>>> Is it meant to be a real thread-local cache (because we expect that
>>> >>>> we have many parallel threads running) or just a cache which uses
>>> >>>> ThreadLocals as a kind of internal implementation (not expecting
>>> >>>> really parallel threads)?
>>> >>>>
>>> >>>> The comment within this class is
>>> >>>> "// alloc are expensive so using a very trivial buffer, we remove
>>> >>>> from TL to say we use it and reset it when we are done (close)"
>>> >>>> Which allocations are meant? StringBuilder, char[], <T>, ...
>>> >>>>
>>> >>>> Is Fleece supposed to be a thread-safe library? (Beyond what's marked
>>> >>>> to be thread-safe in the API specs)
>>> >>>>
>>> >>>> Thanks
>>> >>>> Hendrik (salyh)
>>> >>>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Hendrik Saly (salyh, hendrikdev22)
>>> >> @hendrikdev22
>>> >> PGP: 0x22D7F6EC
>>>
>>>
>>>
>>> --
>>> Hendrik Saly (salyh, hendrikdev22)
>>> @hendrikdev22
>>> PGP: 0x22D7F6EC
>>>
>
>
>
> --
> Hendrik Saly (salyh, hendrikdev22)
> @hendrikdev22
> PGP: 0x22D7F6EC
