kafka-users mailing list archives

From Lance Laursen <llaur...@rubiconproject.com>
Subject Re: managing replication between nodes on specific NICs on a host
Date Fri, 04 Dec 2015 22:37:54 GMT

You're not going to be able to get it to do what you want. When a publisher
queries a list of seed brokers for cluster information, the hostnames
returned in the metadata response are the same hostnames the brokers use to
talk to and replicate between each other. Run zkCli.sh and then "get
/brokers/ids/0" to see this information. You can change what gets set there
by modifying advertised.host.name in your server.properties. The only way
to get around this would be to set /etc/hosts entries on your publishers
such that the hostname passed back resolves to your "unused" interface.
This is obviously a terrible idea, so if you have RHEL7 or something else
new enough to support the team driver, use that. Otherwise use the older
bonding driver. Make sure your switch supports some sort of link
aggregation protocol like LACP.
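For reference, here's roughly what checking and overriding the advertised address looks like. The broker id (0), hostnames, and port below are placeholders, and advertised.host.name is the pre-0.9-era property name:

```
# Inside zkCli.sh, inspect what broker 0 registers in ZooKeeper.
# The JSON returned includes the host and port that clients AND
# other brokers will use to reach it:
get /brokers/ids/0

# server.properties: control what gets registered/advertised.
# host.name binds the listener; advertised.host.name is what
# goes into the metadata response (placeholder hostname):
host.name=broker0-internal.example
advertised.host.name=broker0-public.example
advertised.port=9092
```

Note that because replication traffic follows the same advertised address, there is no supported way to advertise one hostname to producers and a different one to peer brokers.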

On Fri, Dec 4, 2015 at 1:02 PM, scott macfawn <scottmacfawn@gmail.com> wrote:

> I am attempting to see if I can maximize throughput on 1GB interfaces. Each
> of my brokers has two 1GB interfaces on it, on two different subnets, with
> two different CNAMEs. Currently, I am seeing that one of the interfaces is
> taking 90% of the traffic while the other is basically taking 10% of the
> traffic. What I am curious about is whether anyone has been able to
> overcome that by modifying the server.properties file to set the host.name
> property to one of the interfaces so that the broker does not bind to all
> interfaces on the host, and then modifying the advertised.port property to
> advertise the other network to the zookeepers. Ideally I would like to
> split the traffic 50-50 if possible, by having all of the producers write
> to the brokers on one interface and having all replication between the
> data nodes take place on the other interface.
> Has anyone tried anything like this?  My other thought is to test "teaming"
> the network cards and assigning a vIP to them, essentially giving me a 2GB
> interface.
> /Scott
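The teaming idea from the quoted message is the workable one. A minimal sketch of setting up a team device with the LACP runner on RHEL 7 via NetworkManager follows; the interface names (em1, em2) and the address are placeholders for your environment, and the switch ports must be configured for LACP:

```
# Create the team device with the LACP runner
nmcli con add type team con-name team0 ifname team0 \
    config '{"runner": {"name": "lacp"}}'

# Enslave both physical NICs to the team (placeholder names)
nmcli con add type team-slave con-name team0-em1 ifname em1 master team0
nmcli con add type team-slave con-name team0-em2 ifname em2 master team0

# Assign the single IP the brokers will bind/advertise (placeholder)
nmcli con mod team0 ipv4.addresses 10.0.0.5/24 ipv4.method manual
nmcli con up team0

# Verify runner selection and per-port state
teamdctl team0 state
```

With this in place the broker advertises one hostname on the team's IP, and the kernel distributes flows across both links, which addresses the 90/10 imbalance without splitting producer and replication traffic.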
