samza-dev mailing list archives

From sriram <sriram....@gmail.com>
Subject Re: soft references for object caching in the key-value storage engine
Date Thu, 12 Sep 2013 04:11:36 GMT
I am not sure about the performance difference between the object cache and
the block cache, so I would leave that decision to you. W.r.t. the GC
latencies, I do think they could be an issue for long-running near-real-time
systems. Consider a hypothetical case where a long GC pause of 30 seconds
happens once every hour. Over a day, the task effectively does no work for
12 minutes; in other words, it would fall 12 minutes behind on its input
streams per day. This lag accumulates over time, so irrespective of how well
the task otherwise keeps up, it would eventually start lagging. There are
use cases for which this lag may not be acceptable. Whether it bites depends
on the task semantics, but I don't think it can be considered a minor issue.
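(For the record, the arithmetic behind that figure, using only the
hypothetical numbers from the paragraph above:)

    // Back-of-the-envelope check of the lag figure above; the numbers are
    // the hypothetical ones from the previous paragraph, not measurements.
    public class GcLagEstimate {
        public static void main(String[] args) {
            long pauseSeconds = 30;   // one long GC pause of 30 seconds
            long pausesPerDay = 24;   // once every hour
            long lostSeconds = pauseSeconds * pausesPerDay;            // 720 s
            System.out.println(lostSeconds / 60 + " minutes of lag per day"); // 12
        }
    }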


On Wed, Sep 11, 2013 at 8:22 PM, Jay Kreps <jay.kreps@gmail.com> wrote:

> Sriram, yes, I think you raise the best criticism of this approach. In the
> current design the caches are per task-store combination. This is arguably
> a nightmare to tune, and in my experience people never do this kind of
> thing right, but you do at least have the ability to say X% of memory for
> store A, Y% for store B. Arguably, LRU shared across the stores of a task
> should be fine (better, even), but sharing between tasks could be an issue.
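> (To make the tuning problem concrete, here is a hypothetical sizing
> exercise -- every number and name below is made up for illustration, it is
> not how Samza actually sizes anything:)
>
>     // Hypothetical: splitting a fixed heap budget across per-task,
>     // per-store object caches, which is what has to be sized up front today.
>     public class CacheSizingSketch {
>         public static void main(String[] args) {
>             long heapForCaches = 2L << 30;       // pretend 2 GB is set aside for caches
>             int tasks = 16;                      // task instances in one container
>             double storeAShare = 0.7;            // "X% of memory for store A"
>             double storeBShare = 0.3;            // "Y% for store B"
>             long guessedBytesPerEntry = 1024;    // object size + per-entry overhead, guessed
>
>             long storeAEntries = (long) (heapForCaches * storeAShare) / tasks / guessedBytesPerEntry;
>             long storeBEntries = (long) (heapForCaches * storeBShare) / tasks / guessedBytesPerEntry;
>             System.out.println("per-task cache entries: storeA=" + storeAEntries
>                                + ", storeB=" + storeBEntries);
>         }
>     }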
>
> Martin, both you and Sriram raise the possibility of GC latency but that is
> actually kind of a minor issue for a stream processing system (certainly in
> comparison to a real-time request-response service).
>
> Overall I think both these issues would tend to be minor because this is
> just the object cache. LevelDB still has a block cache.
>
> In either case I threw this out there more speculatively to see if anyone
> knew of any critical drawbacks.
>
> -Jay
>
>
>
>
> On Tue, Sep 10, 2013 at 11:54 AM, Martin Scholl <m@funkpopes.org> wrote:
>
> > I'm by no means a JVM expert and can't give any final judgement on this,
> > but I do remember various problems people ran into when using
> > SoftReferences as well as WeakReferences.
> >
> > What a quick search yielded:
> >
> > "Soft references contribute to memory pressure but throughput collectors
> > clear them all at once when memory fills up while CMS gradually clears
> > them, so while you do get this memory sensitive gradual eviction of soft
> > reference data, you also get increased unpredictability of your garbage
> > collectors and that's not really what you want with CMS."
> > -- http://www.javaperformancetuning.com/news/newtips136.shtml
> >
> > This is a nice argument that would defeat the purpose you bring up here,
> > though I cannot tell whether only CMS shows this behavior. That said, [1]
> > seems to imply that SoftReferences, like WeakReferences, are GC'd in a
> > roughly LRU fashion.
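> > (For reference, HotSpot does expose a knob for that clearing policy: a
> > softly reachable object is kept for roughly -XX:SoftRefLRUPolicyMSPerMB
> > milliseconds per megabyte of free heap since its last access, 1000 by
> > default, so e.g. -XX:SoftRefLRUPolicyMSPerMB=100 clears soft references
> > more aggressively. How each collector applies that policy is another
> > matter.)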
> >
> > My humble suggestion is rather to extend LevelDB to allow expunging data
> > by time, in constant time.
> >
> >
> > Hope it helps,
> > Martin
> >
> > [1]
> > http://stackoverflow.com/questions/299659/what-is-the-difference-between-a-soft-reference-and-a-weak-reference-in-java
> >
> >
> > On Tue, Sep 10, 2013 at 5:50 PM, Jay Kreps <jay.kreps@gmail.com> wrote:
> >
> > > One idea I had was to use soft references for the object cache in the
> > > key-value store. Currently we use an LRU hashmap, but the drawback of
> > > this is that it needs to be carefully sized based on the heap size and
> > > the number of partitions. It is a little hard to know when to add memory
> > > to the object cache vs the block cache. Plus, since the size depends
> > > both on the objects in it and on the overhead per object, it is pretty
> > > much impossible to calculate the worst-case memory usage of N objects
> > > and make this work properly with a given heap size.
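> > > (For context, a minimal sketch of the kind of size-bounded LRU map being
> > > described here -- illustrative only, not the actual Samza implementation:)
> > >
> > >     import java.util.LinkedHashMap;
> > >     import java.util.Map;
> > >
> > >     // Size-bounded LRU object cache: maxEntries has to be picked up
> > >     // front, per task and per store, without really knowing the
> > >     // per-object overhead.
> > >     public class LruObjectCache<K, V> extends LinkedHashMap<K, V> {
> > >         private final int maxEntries;
> > >
> > >         public LruObjectCache(int maxEntries) {
> > >             super(16, 0.75f, true);  // accessOrder = true gives LRU eviction order
> > >             this.maxEntries = maxEntries;
> > >         }
> > >
> > >         @Override
> > >         protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
> > >             return size() > maxEntries;  // evict the least recently used entry
> > >         }
> > >     }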
> > >
> > > Another option would be to use soft references:
> > > http://docs.oracle.com/javase/7/docs/api/java/lang/ref/SoftReference.html
> > >
> > > Soft references will let you use all available heap space as a cache
> > > that gets gc'd only when memory is needed. These are usually frowned
> > > upon for caches due to the unpredictability of the discard--basically
> > > the garbage collector has some heuristic by which it chooses what to
> > > discard (
> > > http://jeremymanson.blogspot.com/2009/07/how-hotspot-decides-to-clear_07.html
> > > ), but it is based on a heuristic of how much actual free memory to
> > > maintain. This makes soft references a little dicey for
> > > latency-sensitive services.
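> > > (A minimal sketch of what a soft-reference-backed object cache might
> > > look like -- names and structure are illustrative, not a worked-out
> > > design:)
> > >
> > >     import java.lang.ref.SoftReference;
> > >     import java.util.concurrent.ConcurrentHashMap;
> > >
> > >     // Values may be reclaimed by the GC under memory pressure; keys stay
> > >     // strongly referenced, so a real version would also purge cleared
> > >     // entries (e.g. via a ReferenceQueue) to avoid leaking keys.
> > >     public class SoftValueCache<K, V> {
> > >         private final ConcurrentHashMap<K, SoftReference<V>> map = new ConcurrentHashMap<>();
> > >
> > >         public void put(K key, V value) {
> > >             map.put(key, new SoftReference<>(value));
> > >         }
> > >
> > >         public V get(K key) {
> > >             SoftReference<V> ref = map.get(key);
> > >             if (ref == null) return null;
> > >             V value = ref.get();
> > >             if (value == null) map.remove(key);  // referent was collected
> > >             return value;
> > >         }
> > >     }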
> > >
> > > But for Samza the caching is really about optimizing throughput, not
> > > reducing the latency of a particular lookup. So using the rest of the
> > > free memory in the heap for caching is actually attractive. It is true
> > > that the garbage collector might occasionally destroy our cache, but
> > > that is actually okay and possibly worth getting orders of magnitude
> > > extra cache space.
> > >
> > > This does seem like the kind of thing that would have odd corner cases.
> > > Anyone have practical experience with these who can tell me why this is
> > > a bad idea?
> > >
> > > -Jay
> > >
> >
>
