kafka-users mailing list archives

From Ryanne Dolan <ryannedo...@gmail.com>
Subject Re: MirrorMaker 2.0 XDCR / KIP-382
Date Tue, 04 Jun 2019 17:38:13 GMT
Jeremy, please see relevant changes documented here:

https://github.com/apache/kafka/blob/cae2a5e1f0779a0889f6cb43b523ebc8a812f4c2/connect/mirror/README.md#multicluster-environments

I've added a --clusters argument which makes XDCR a lot easier to manage,
obviating the configuration race issue.
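For reference, invocation looks roughly like this (the DC1/DC2 aliases and the mm2.properties file name are illustrative -- see the linked README for exact syntax):

```shell
# In DC1 -- run the MM2 driver, but only target the local cluster,
# so the two drivers don't race on each other's configuration:
./bin/connect-mirror-maker.sh mm2.properties --clusters DC1

# In DC2:
./bin/connect-mirror-maker.sh mm2.properties --clusters DC2
```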

Thanks again!
Ryanne



On Thu, May 30, 2019 at 12:22 PM Ryanne Dolan <ryannedolan@gmail.com> wrote:

> Jeremy, thanks for double checking. I think you are right -- this is a
> regression introduced here [1]. For context, we noticed that heartbeats
> were only being sent to target clusters, whereas they should be sent to
> every cluster regardless of replication topology. To get heartbeats running
> everywhere, the change ended up updating the configuration across all
> clusters, which yields the behavior you are seeing.
>
> Thanks for reporting this. I'll get this fixed within the next few days
> and let you know. In the meantime, you can use the same configuration in
> both DCs, and set "tasks.max" to some high number to ensure the replication
> load is balanced across DCs.
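> A sketch of that workaround, assuming the DC1/DC2 aliases from your
> config (the tasks.max value here is just an illustration -- size it to
> your partition count):

```properties
# Identical mm2.properties deployed in both DCs
clusters = DC1, DC2
DC1.bootstrap.servers = kafka.dc1
DC2.bootstrap.servers = kafka.dc2

DC1->DC2.enabled = true
DC1->DC2.topics = test
DC2->DC1.enabled = true
DC2->DC1.topics = test

# High task count so replication work spreads across both MM2 nodes
tasks.max = 10
```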
>
> Ryanne
>
> 1:
> https://github.com/apache/kafka/pull/6295/commits/4dde001d5a521188005deb488fec5129a43eac6a#diff-823506b05664108f35046387b5fb43ecR104
>
> On Thu, May 30, 2019 at 11:53 AM Jeremy Ford <jeremy.l.ford@gmail.com>
> wrote:
>
>> Apologies, copy/paste issue.  Config should look like:
>>
>> In DC1:
>>
>> DC1->DC2.enabled = true
>> DC2->DC1.enabled = false
>>
>> In DC2:
>>
>> DC1->DC2.enabled = false
>> DC2->DC1.enabled = true
>>
>> Running 1 mm2 node in DC1 / DC2 each.  If I start up the DC1 node first,
>> then DC1 data is replicated to DC2.  DC2 data does not replicate.
>> Inverting the start order inverts the cluster that gets replicated.
>> Running the DC2 config locally and debugging it, it seems that the
>> source connector task is not started.  I'm wondering if somehow the two
>> DCs are conflicting about what should be running, since they share the
>> same group names / connect name, etc.  I tried overriding the group.id
>> and name of the connectors, which resulted in no replication.  Not
>> quite sure what could be going wrong.
>>
>>
>>
>> On Thu, May 30, 2019 at 11:26 AM Ryanne Dolan <ryannedolan@gmail.com>
>> wrote:
>>
>> > Hey Jeremy, it looks like you've got a typo or copy-paste artifact in
>> > the configuration there -- you've got DC1->DC2 listed twice, but not
>> > the reverse. That would result in the behavior you are seeing, as DC1
>> > actually has nothing enabled. Assuming this is just a mistake in the
>> > email, your approach is otherwise correct.
>> >
>> > Ryanne
>> >
>> >
>> > On Thu, May 30, 2019 at 7:56 AM Jeremy Ford <jeremy.l.ford@gmail.com>
>> > wrote:
>> >
>> > > I am attempting to set up a simple cross data center replication POC
>> > > using the new mirror maker branch.  The behavior is not quite what I
>> > > was expecting, so it may be that I have made some assumptions in
>> > > terms of deployment that are incorrect or my setup is incorrect (see
>> > > below).  When I run the two MMs, it seems like replication will work
>> > > for one DC but not for the other.  If I run MM2 on just one node and
>> > > enable both pairs, then replication works as expected.  However, that
>> > > deployment does not match the described setup in the KIP-382
>> > > documentation.
>> > >
>> > > Should I be using the MM driver to deploy in both DCs?  Or do I need
>> > > to use a connect cluster instead?  Is my configuration (included
>> > > below) possibly incorrect?
>> > >
>> > > Thanks,
>> > >
>> > > Jeremy Ford
>> > >
>> > >
>> > >
>> > >
>> > > Setup:
>> > >
>> > > I have two data centers.  I have MM2 deployed in both DCs on a
>> > > single node.  I am using the MirrorMaker driver for the deployment.
>> > > The configuration for both DCs is exactly the same, except for the
>> > > enabled flags.
>> > >
>> > >
>> > > Config File:
>> > >
>> > > clusters: DC1,DC2
>> > > DC1.bootstrap.servers = kafka.dc1
>> > > DC2.bootstrap.servers = kafka.dc2
>> > >
>> > > DC1->DC2.topics = test
>> > > DC2->DC1.topics = test
>> > >
>> > >
>> > > In DC1:
>> > >
>> > > DC1->DC2.enabled=true
>> > > DC1->DC2.enabled=false
>> > >
>> > > In DC2:
>> > >
>> > > DC1->DC2.enabled=false
>> > > DC1->DC2.enabled=true
>> >
>>
>
