kafka-users mailing list archives

From Jun Rao <jun...@gmail.com>
Subject Re: fidelity of offsets when mirroring
Date Wed, 05 Mar 2014 04:42:30 GMT
Currently, message offsets are not preserved by mirror maker.

You can potentially do the failover based on the failover time. Suppose
that consumption in A failed at time t. You can use the getOffsetsBefore API
against cluster B to find the latest offset before time t; that becomes the
starting offset in B. Then you have to manually import these offsets into
ZooKeeper and start the consumer.
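To make the lookup concrete, here is a minimal sketch of the idea behind a
timestamp-based offset lookup, not Kafka's actual implementation: conceptually,
getOffsetsBefore resolves a timestamp to the base offset of the newest log
segment written at or before that time. The segment list and numbers below are
made-up illustrations.

```python
def offset_before(segments, t):
    """Sketch of a timestamp -> offset lookup.

    segments: list of (base_offset, last_modified_ts) tuples, sorted by
    base_offset. Returns the base offset of the latest segment whose
    last_modified_ts <= t, or None if every segment is newer than t
    (in which case the consumer must start from the earliest offset).
    """
    candidate = None
    for base_offset, ts in segments:
        if ts <= t:
            candidate = base_offset
        else:
            break  # segments are ordered, so nothing later can qualify
    return candidate

# Hypothetical segment layout for one partition in cluster B.
segments = [(0, 100), (5000, 200), (12000, 300)]

print(offset_before(segments, 250))  # -> 5000
print(offset_before(segments, 50))   # -> None
```

After computing an offset per partition this way, the "manual import" step
means writing each value under the 0.8 high-level consumer's ZooKeeper path
(/consumers/&lt;group&gt;/offsets/&lt;topic&gt;/&lt;partition&gt;) before
starting the consumer group against B.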



On Tue, Mar 4, 2014 at 3:23 PM, Seth White <seth.white@salesforce.com> wrote:

> Hi,
> I have a question about mirroring. I would like to create a highly
> available Kafka service that runs on AWS and can survive an AZ failure.
> Based on what I've read, I plan to create a Kafka cluster in each AZ and
> use mirror maker to replicate one cluster to the other. I'll call the two
> clusters in their respective availability zones A and B. A is the primary,
> which is replicated to B. Normally, all consumers consume from A and
> record their current offset in a persistent store that is replicated across
> A and B (like Dynamo). If I detect that A has failed, producers and
> consumers will fail over to B. That's the basic idea.
> Now, the question: Can I rely on the offset that is being stored in the
> persistent store to refer to the same event in each cluster? Or is it
> possible for the two to get out of sync over time - I don't know why,
> failures of some kind maybe - in which case the offset from A might not
> really be valid with respect to the replica B. If that is possible, then
> I'm wondering what I can/should do about it to achieve a clean failover.
> I realize that the replication may lag behind, so some events from A may
> be lost when there is a failover. That is okay.
> I've been told that creating a single cluster that spans AZs and relying
> on the new replication functionality in 0.8 is a bad idea, as zookeeper
> isn't well behaved in that case. Hence my alternative design.
> Thanks in advance.
> Seth
