cassandra-commits mailing list archives

From "Jaakko Laine (JIRA)" <>
Subject [jira] Commented: (CASSANDRA-620) Add per-keyspace replication factor (possibly even replication strategy)
Date Wed, 13 Jan 2010 13:48:54 GMT


Jaakko Laine commented on CASSANDRA-620:

(1) ARS.replica_ cannot be used for this purpose, as it might be different for two instances
of the same replication strategy. Instead, the maximum for each type of replication strategy
in use should be used (see my previous comment above). This would allow us to calculate pending
ranges only once per replication strategy.
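Not part of the original message, but the grouping described in (1) can be sketched roughly as below. The class and parameter names are made up for illustration; Cassandra's actual ARS/TokenMetadata types are not shown. The point is simply to collapse per-keyspace replication factors to one maximum per strategy class, so pending ranges need only be computed once per strategy type:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: given each keyspace's strategy class and replication
// factor, keep only the maximum RF per strategy class. Pending ranges can
// then be calculated once per strategy class at that maximum, instead of
// once per keyspace.
public class MaxReplicationFactor {
    public static Map<String, Integer> maxPerStrategy(Map<String, String> strategyByKeyspace,
                                                      Map<String, Integer> rfByKeyspace) {
        Map<String, Integer> max = new HashMap<>();
        for (Map.Entry<String, String> e : strategyByKeyspace.entrySet()) {
            String strategyClass = e.getValue();
            int rf = rfByKeyspace.getOrDefault(e.getKey(), 0);
            // keep the largest RF seen for this strategy class
            max.merge(strategyClass, rf, Math::max);
        }
        return max;
    }
}
```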

(2) Assigning a new value on top of the old one was not a problem. If somebody was still using
the old pending ranges, let them do so. Gossip propagation and pending-range calculation
are not very accurate in terms of timing anyway, so if somebody uses the old version for a
few microseconds more, that is OK. However, if we change the data structure while somebody
else is using it, that is a different issue, I think.
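To illustrate the distinction in (2), a minimal copy-on-write sketch (purely illustrative names, not Cassandra's actual pending-ranges code): the recalculation publishes a fresh immutable map by swapping a reference, so concurrent readers keep whatever consistent, slightly stale view they already hold, and the structure itself is never mutated under them:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: publish pending ranges by atomically replacing the
// reference to an immutable snapshot, never by mutating a shared map.
public class PendingRangesHolder {
    private final AtomicReference<Map<String, String>> pendingRanges =
            new AtomicReference<>(Collections.<String, String>emptyMap());

    // Copy the freshly computed ranges into a new unmodifiable map and
    // swap it in. Readers of the old map are unaffected.
    public void recalculate(Map<String, String> freshlyComputed) {
        pendingRanges.set(Collections.unmodifiableMap(new HashMap<>(freshlyComputed)));
    }

    public Map<String, String> current() {
        return pendingRanges.get();
    }
}
```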

(4) Yeah, basically anything that keeps track of which tables and ranges are needed from where
should work. One thing to remember, though, is that stream sources might be different for
every table, so it might be easier to just keep track, on a per-table basis, of what has been
transferred, instead of calculating an inverse list of which ranges from which table are needed
from each host. I think the first option would just need a small change to StorageService (have
addBootstrapSource and removeBootstrapSource take "table" as an extra parameter and internally
use a hashmap). This would also take care of #673 (are you planning to do all my work? :-)
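Again not part of the original message, but the StorageService change suggested in (4) might look roughly like this. The method names follow the text; the surrounding class and the use of plain strings for hosts are simplifications for illustration:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: track outstanding bootstrap sources per table
// (keyspace), since each table may stream from different hosts.
public class BootstrapTracker {
    // table name -> hosts we are still waiting to finish streaming from
    private final Map<String, Set<String>> sourcesByTable = new HashMap<>();

    public synchronized void addBootstrapSource(String host, String table) {
        sourcesByTable.computeIfAbsent(table, k -> new HashSet<>()).add(host);
    }

    // Returns true when no table is waiting on any source any more,
    // i.e. the bootstrap as a whole has completed.
    public synchronized boolean removeBootstrapSource(String host, String table) {
        Set<String> hosts = sourcesByTable.get(table);
        if (hosts != null) {
            hosts.remove(host);
            if (hosts.isEmpty())
                sourcesByTable.remove(table);
        }
        return sourcesByTable.isEmpty();
    }
}
```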

> Add per-keyspace replication factor (possibly even replication strategy)
> ------------------------------------------------------------------------
>                 Key: CASSANDRA-620
>                 URL:
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Jonathan Ellis
>            Assignee: Gary Dusbabek
>             Fix For: 0.9
>         Attachments: 0001-push-replication-factor-and-strategy-into-table-exce.patch,
> 0002-cleaned-up-as-much-as-possible-before-dealing-with-r.patch, 0003-push-table-names-into-streaming-expose-TMD-in-ARS.patch,
> 0004-fix-non-compiling-tests.patch, 0005-introduce-table-into-pending-ranges-code.patch, 0006-added-additional-testing-keyspace.patch,
> 0007-modify-TestRingCache-to-make-it-easier-to-test-speci.patch, 0008-push-endpoint-snitch-into-keyspace-configuration.patch,
> (but partitioner may only be cluster-wide, still)
> not 100% sure this makes sense but it would allow maintaining system metadata in a
> replicated-across-entire-cluster keyspace (without ugly special casing), as well as making
> Cassandra more flexible as a shared resource for multiple apps

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
