cassandra-commits mailing list archives

From "Ben Manes (JIRA)" <>
Subject [jira] Commented: (CASSANDRA-975) explore upgrading CLHM
Date Thu, 10 Jun 2010 23:05:15 GMT


Ben Manes commented on CASSANDRA-975:

That's understandable, but I was hoping it would be better. The new version has the advantage
of avoiding degradation scenarios that could affect Cassandra due to its large caches. It also
makes an improved eviction policy (LIRS) possible in the future, which could make up for the
slightly lower performance by increasing the hit rate. That work is still experimental, though,
and I've been too busy to put much effort into it.

The older version runs in SECOND_CHANCE mode, which does almost no work on a read (it just sets
a volatile flag on the entry). On a write, it iterates over the FIFO queue to evict the first
entry without that flag set; any visited entry that had the flag set has it cleared and is
resubmitted to the tail of the queue. This allows reads to avoid locking, but it can result in
flushing the entire queue if all the entries were marked. That's an O(n) operation and writes
are blocking, but for small application caches (100 - 1000 entries on average) it isn't bad.
For large caches (1M+ entries), it could be noticeable and unacceptable. I would also suspect
that the policy's hit rate degrades with the cache's size, since a very large cache may not
follow the rule of thumb that only 10-20% of entries are hot, as is normal in applications.
This may have been reported in Cassandra (
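The second-chance mechanism described above can be sketched roughly as follows. This is a single-threaded illustration with made-up names, not CLHM's actual internals: reads only set a flag, while eviction scans the FIFO queue and may, in the worst case, cycle through every marked entry (the O(n) flush).

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class SecondChanceCache<K, V> {
    static final class Entry<K, V> {
        final K key;
        V value;
        volatile boolean marked; // set on read, cleared during the eviction scan
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Entry<K, V>> map = new HashMap<>();
    private final Deque<Entry<K, V>> fifo = new ArrayDeque<>();

    SecondChanceCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Entry<K, V> e = map.get(key);
        if (e == null) return null;
        e.marked = true;           // no queue reordering, no lock on the read path
        return e.value;
    }

    void put(K key, V value) {
        if (map.size() >= capacity && !map.containsKey(key)) evictOne();
        Entry<K, V> e = new Entry<>(key, value);
        Entry<K, V> old = map.put(key, e);
        if (old != null) fifo.remove(old);
        fifo.addLast(e);
    }

    private void evictOne() {
        // May visit every entry if all are marked: the O(n) degradation case.
        while (true) {
            Entry<K, V> head = fifo.pollFirst();
            if (head.marked) {
                head.marked = false;   // second chance: resubmit to the tail
                fifo.addLast(head);
            } else {
                map.remove(head.key);
                return;
            }
        }
    }
}
```

Note how a recently read (marked) entry survives one eviction pass, while a never-read entry at the head is evicted immediately.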

The new design trades slightly higher memory usage and an amortized penalty for performing
LRU reordering in exchange for (mostly) non-blocking writes with no degradation scenario. The
memory usage and penalty could probably be improved by tweaking the heuristics (currently just
magic numbers). In a dedicated server like Cassandra, we could also experiment with ThreadLocal
reorder buffers instead of per-segment ones, which would avoid contention on the buffer if that
turned out to be an issue. With help, I can probably bring it on par if the slightly slower
results are a concern.
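A rough sketch of the buffered-reordering idea, under stated assumptions: the class and field names are illustrative, not CLHM's real code, and the real implementation uses per-segment buffers and an intrusive doubly-linked list rather than the O(n) Deque operations here. A read records its key in a lock-free buffer instead of reordering the LRU list immediately; once the buffer passes a threshold (one of the "magic number" heuristics), whichever thread notices drains it under the eviction lock and replays the accesses, amortizing the reorder cost.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

class BufferedLruCache<K, V> {
    private static final int DRAIN_THRESHOLD = 16; // heuristic "magic number"

    private final int capacity;
    private final ConcurrentHashMap<K, V> data = new ConcurrentHashMap<>();
    private final Deque<K> lruOrder = new ArrayDeque<>();   // guarded by evictionLock
    private final ConcurrentLinkedQueue<K> readBuffer = new ConcurrentLinkedQueue<>();
    private final AtomicInteger pending = new AtomicInteger();
    private final ReentrantLock evictionLock = new ReentrantLock();

    BufferedLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        V value = data.get(key);          // lock-free read of the hash table
        if (value != null) {
            readBuffer.add(key);          // record the access, don't reorder yet
            // tryLock: if another thread holds the lock, skip the drain (non-blocking)
            if (pending.incrementAndGet() >= DRAIN_THRESHOLD && evictionLock.tryLock()) {
                try { drainBuffer(); } finally { evictionLock.unlock(); }
            }
        }
        return value;
    }

    void put(K key, V value) {
        data.put(key, value);
        evictionLock.lock();
        try {
            lruOrder.remove(key);
            lruOrder.addLast(key);
            drainBuffer();
            while (data.size() > capacity) {
                K eldest = lruOrder.pollFirst();
                if (eldest != null) data.remove(eldest);
            }
        } finally {
            evictionLock.unlock();
        }
    }

    // Replay buffered reads against the LRU list; cost is amortized over many reads.
    private void drainBuffer() {
        K key;
        while ((key = readBuffer.poll()) != null) {
            pending.decrementAndGet();
            if (data.containsKey(key)) {
                lruOrder.remove(key);     // O(n) here; CLHM uses an intrusive list
                lruOrder.addLast(key);
            }
        }
    }
}
```

The trade the comment describes is visible here: the buffer and counter are the extra memory, the drain is the amortized penalty, and no read ever blocks on the LRU list.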

For now, it's probably a choice between potentially suffering a degradation scenario and slightly
slower cache operations.

> explore upgrading CLHM
> ----------------------
>                 Key: CASSANDRA-975
>                 URL:
>             Project: Cassandra
>          Issue Type: Task
>            Reporter: Jonathan Ellis
>            Assignee: Matthew F. Dennis
>            Priority: Minor
>             Fix For: 0.8
>         Attachments: 0001-trunk-975.patch, clhm_test_results.txt
> The new version should be substantially better "on large caches where many entries were
> read," which is exactly what you see in our row and key caches.
> Hopefully we can get Digg to help test, since they could reliably break CLHM when it
> was buggy.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
