samza-dev mailing list archives

From Chris Riccomini <criccom...@linkedin.com>
Subject Re: soft references for object caching in the key-value storage engine
Date Tue, 10 Sep 2013 18:43:45 GMT
Hey Jay,

Hmm. This seems cool, but I don't really know much about it. It seems like
it wouldn't be that much effort to patch the cache to run it, though.

One question I'd have is how this affects our heap usage metrics. If the
heap always appears to be 100% used, it'd be nice to get some measure of
non-soft-referenced usage, so we have a view of how close we actually are
to running out of memory in a given container. It's the same problem top's
memory statistics have with the OS page cache.
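To make that concrete, roughly something like this is what our heap gauge
sees (just a sketch, the class name is made up, not actual Samza metrics
code); with a soft-reference cache, the "used" number includes all the
softly-reachable entries the collector could reclaim at any moment, so it
would always trend toward 100%:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsageGauge {
  public static void main(String[] args) {
    MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
    MemoryUsage heap = memory.getHeapMemoryUsage();

    // "Used" counts softly-reachable cache entries that the collector could
    // reclaim at any moment, so this ratio overstates how close we are to
    // actually running out of memory.
    long max = heap.getMax(); // may be -1 if no explicit max is set
    double usedFraction = max > 0 ? (double) heap.getUsed() / max : Double.NaN;
    System.out.printf("heap used: %d / %d (%.0f%%)%n",
        heap.getUsed(), max, usedFraction * 100);
  }
}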

It seems pretty straightforward to patch locally and try it out. Maybe
we'll learn something from that.
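If we did try it, I'd imagine the patch looks roughly like this (just a
sketch of the idea, hypothetical names, not a drop-in replacement for the
actual store cache code):

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a soft-reference object cache: entries stay cached as long as
// the heap has room, and the collector may clear them when memory is tight.
public class SoftCache<K, V> {
  private final ConcurrentHashMap<K, SoftReference<V>> map =
      new ConcurrentHashMap<>();

  public void put(K key, V value) {
    map.put(key, new SoftReference<>(value));
  }

  public V get(K key) {
    SoftReference<V> ref = map.get(key);
    if (ref == null) {
      return null;
    }
    V value = ref.get();
    if (value == null) {
      // The collector cleared the referent; drop the stale entry.
      map.remove(key, ref);
      return null;
    }
    return value;
  }
}

One wrinkle: cleared references leave stale keys behind in the map, so a
real version would probably want a ReferenceQueue to purge them.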

Cheers,
Chris

On 9/10/13 8:50 AM, "Jay Kreps" <jay.kreps@gmail.com> wrote:

>One idea I had was to use soft references for the object cache in the
>key-value store. Currently we use an LRU hashmap, but the drawback is that
>it needs to be carefully sized based on the heap size and the number of
>partitions. It is a little hard to know when to add memory to the object
>cache vs. the block cache. Plus, since the memory footprint depends both
>on the objects themselves and on the per-object overhead, it is pretty
>much impossible to calculate the worst-case memory usage of N objects and
>make this work reliably with a given heap size.
>
>Another option would be to use soft references:
>http://docs.oracle.com/javase/7/docs/api/java/lang/ref/SoftReference.html
>
>Soft references let you use all available heap space as a cache that only
>gets gc'd when memory runs low. They are usually frowned upon for caches
>due to the unpredictability of the discard--basically the garbage
>collector has some heuristic by which it chooses what to discard
>(http://jeremymanson.blogspot.com/2009/07/how-hotspot-decides-to-clear_07.html),
>based on how much actual free memory it tries to maintain. This makes soft
>references a little dicey for latency-sensitive services.
>
>But for Samza the caching is really about optimizing throughput, not
>reducing the latency of a particular lookup. So using the rest of the free
>memory in the heap for caching is actually attractive. It is true that the
>garbage collector might occasionally destroy our cache, but that is
>acceptable and possibly worth it for orders of magnitude more cache space.
>
>This does seem like the kind of thing that would have odd corner cases.
>Anyone have practical experience with these who can tell me why this is a
>bad idea?
>
>-Jay
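For reference, the entry-count-bounded LRU described in the quoted message
above is roughly the following shape (a sketch, not the actual store code);
the bound is on entry count rather than bytes, which is why sizing it
against a heap limit is guesswork:

import java.util.LinkedHashMap;
import java.util.Map;

// Entry-count-bounded LRU cache: evicts the least recently accessed entry
// once maxEntries is exceeded. The bound is on entry count, not bytes, so
// its actual heap footprint depends on object sizes and per-entry overhead.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  public LruCache(int maxEntries) {
    super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries;
  }
}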

