kafka-users mailing list archives

From costa xu <xxb.sk...@gmail.com>
Subject Re: How to bind all Kafka tcp port to private net address
Date Tue, 02 Feb 2016 01:22:38 GMT
Yes, host.name is useless in this case.
Even if I set host.name to the private IP, the broker still binds to 0.0.0.0.
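
The netstat output further down actually shows the data port doing the right thing
already: 9092 is bound to 10.105.7.243, which is what
listeners=PLAINTEXT://10.105.7.243:9092 (or host.name on older configs) controls. The
sockets bound to 0.0.0.0 are very likely not opened by Kafka's socket server at all but
by the JMX agent enabled in the start script: -Dcom.sun.management.jmxremote.port=1105
opens the RMI registry on 1105, and the JVM then exports the RMI connector on an
additional, randomly chosen port unless that port is pinned with
-Dcom.sun.management.jmxremote.rmi.port (available in newer JDK 7 update releases and
JDK 8). A minimal sketch against the start script quoted below; reusing 1105 as the RMI
export port and advertising the broker's private IP are illustrative choices, not
values from this thread:

    # sketch only: same EXTRA_ARGS as in the start script further down, plus two
    # extra flags; 1105 as the RMI export port and the private IP as the advertised
    # RMI hostname are assumptions made for illustration
    EXTRA_ARGS="-name kafkaServer -loggc \
      -Dcom.sun.management.jmxremote=true \
      -Dcom.sun.management.jmxremote.port=1105 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false \
      -Dcom.sun.management.jmxremote.rmi.port=1105 \
      -Djava.rmi.server.hostname=10.105.7.243"

That fixes the port numbers (which should account for at least one of the two random
listeners in the netstat output), but note that on JDKs of that era the JMX/RMI sockets
themselves still bind to the wildcard address; java.rmi.server.hostname only changes
the address handed out to JMX clients, so restricting access to the private interface
still needs a firewall rule (or a newer JDK that supports
com.sun.management.jmxremote.host).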

2016-02-01 23:27 GMT+08:00 John Prout <John.Prout@acxiom.com>:

> I have set the host.name option in the server.properties file, but the
> broker is still binding to all interfaces, and logging that that is what
> it is doing.
>
> This is with Kafka 0.9.0 running on a Solaris 10 server with three virtual
> interfaces installed, in addition to the physical interface.
>
> John
>
> -----Original Message-----
> From: Stephen Powis [mailto:spowis@salesforce.com]
> Sent: Friday, January 29, 2016 10:03 AM
> To: users@kafka.apache.org
> Subject: Re: How to bind all Kafka tcp port to private net address
>
> Pretty sure you want to set this option in your server.properties file:
>
> > # Hostname the broker will bind to. If not set, the server will bind to all interfaces
> > #host.name=localhost
>
> On Thu, Jan 28, 2016 at 10:58 PM, costa xu <xxb.sklse@gmail.com> wrote:
>
> > My version is kafka_2.11-0.9.0.0. I find that Kafka listens on
> > multiple TCP ports on a Linux server.
> >
> > [gdata@gdataqosconnd2 kafka_2.11-0.9.0.0]$ netstat -plnt|grep java
> > (Not all processes could be identified, non-owned process info will
> > not be shown, you would have to be root to see it all.)
> > tcp        0      0 10.105.7.243:9092       0.0.0.0:*      LISTEN      31011/java
> > tcp        0      0 0.0.0.0:51367           0.0.0.0:*      LISTEN      31011/java
> > tcp        0      0 0.0.0.0:1105            0.0.0.0:*      LISTEN      31011/java
> > tcp        0      0 0.0.0.0:42592           0.0.0.0:*      LISTEN      31011/java
> >
> > 10.105.7.243:9092 is the broker's port. 0.0.0.0:1105 is the JMX port
> > that I set in the start script.
> > But I don't know what 0.0.0.0:51367 and 0.0.0.0:42592 are. Trickier
> > still, those ports change every time the Kafka process is restarted.
> >
> > So I want to know how to bind the Kafka ports to the private interface,
> > i.e. '10.105.7.243'.
> > If I cannot bind them, can I at least set fixed port numbers for them to
> > listen on?
> >
> > My Kafka server.properties is:
> > # Licensed to the Apache Software Foundation (ASF) under one or more
> > # contributor license agreements.  See the NOTICE file distributed with
> > # this work for additional information regarding copyright ownership.
> > # The ASF licenses this file to You under the Apache License, Version 2.0
> > # (the "License"); you may not use this file except in compliance with
> > # the License.  You may obtain a copy of the License at
> > #
> > #    http://www.apache.org/licenses/LICENSE-2.0
> > #
> > # Unless required by applicable law or agreed to in writing, software
> > # distributed under the License is distributed on an "AS IS" BASIS,
> > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> > # See the License for the specific language governing permissions and
> > # limitations under the License.
> > # see kafka.server.KafkaConfig for additional details and defaults
> >
> > ############################# Server Basics #############################
> >
> > # The id of the broker. This must be set to a unique integer for each broker.
> > broker.id=1
> >
> > ############################# Socket Server Settings #############################
> >
> > listeners=PLAINTEXT://10.105.7.243:9092
> >
> > # The port the socket server listens on
> > #port=9092
> >
> > # Hostname the broker will bind to. If not set, the server will bind to all interfaces
> > #host.name=localhost
> >
> > # Hostname the broker will advertise to producers and consumers. If not set, it uses the
> > # value for "host.name" if configured.  Otherwise, it will use the value returned from
> > # java.net.InetAddress.getCanonicalHostName().
> > #advertised.host.name=<hostname routable by clients>
> >
> > # The port to publish to ZooKeeper for clients to use. If this is not set,
> > # it will publish the same port that the broker binds to.
> > #advertised.port=<port accessible by clients>
> >
> > # The number of threads handling network requests
> > num.network.threads=3
> >
> > # The number of threads doing disk I/O
> > num.io.threads=8
> >
> > # The send buffer (SO_SNDBUF) used by the socket server
> > socket.send.buffer.bytes=102400
> >
> > # The receive buffer (SO_RCVBUF) used by the socket server
> > socket.receive.buffer.bytes=102400
> >
> > # The maximum size of a request that the socket server will accept (protection against OOM)
> > socket.request.max.bytes=104857600
> >
> >
> > ############################# Log Basics #############################
> >
> > # A comma separated list of directories under which to store log files
> > log.dirs=/data/gdata/var/kafka-logs
> >
> > # The default number of log partitions per topic. More partitions allow greater
> > # parallelism for consumption, but this will also result in more files across
> > # the brokers.
> > num.partitions=1
> >
> > # The number of threads per data directory to be used for log recovery at startup
> > # and flushing at shutdown.
> > # This value is recommended to be increased for installations with data dirs
> > # located in RAID array.
> > num.recovery.threads.per.data.dir=1
> >
> > ############################# Log Flush Policy #############################
> >
> > # Messages are immediately written to the filesystem but by default we only fsync() to sync
> > # the OS cache lazily. The following configurations control the flush of data to disk.
> > # There are a few important trade-offs here:
> > #    1. Durability: Unflushed data may be lost if you are not using replication.
> > #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does
> > #       occur as there will be a lot of data to flush.
> > #    3. Throughput: The flush is generally the most expensive operation, and a small flush
> > #       interval may lead to excessive seeks.
> > # The settings below allow one to configure the flush policy to flush data after a period
> > # of time or every N messages (or both). This can be done globally and overridden on a
> > # per-topic basis.
> >
> > # The number of messages to accept before forcing a flush of data to disk
> > #log.flush.interval.messages=10000
> >
> > # The maximum amount of time a message can sit in a log before we force a flush
> > #log.flush.interval.ms=1000
> >
> > ############################# Log Retention Policy #############################
> >
> > # The following configurations control the disposal of log segments. The policy can
> > # be set to delete segments after a period of time, or after a given size has accumulated.
> > # A segment will be deleted whenever *either* of these criteria are met. Deletion always
> > # happens from the end of the log.
> >
> > # The minimum age of a log file to be eligible for deletion
> > log.retention.hours=48
> >
> > # A size-based retention policy for logs. Segments are pruned from the log as long as
> > # the remaining segments don't drop below log.retention.bytes.
> > #log.retention.bytes=1073741824
> >
> > # The maximum size of a log segment file. When this size is reached a new log segment
> > # will be created.
> > log.segment.bytes=1073741824
> >
> > # The interval at which log segments are checked to see if they can be deleted according
> > # to the retention policies
> > log.retention.check.interval.ms=300000
> >
> > # By default the log cleaner is disabled and the log retention policy will default to
> > # just delete segments after their retention expires.
> > # If log.cleaner.enable=true is set the cleaner will be enabled and individual logs
> > # can then be marked for log compaction.
> > log.cleaner.enable=false
> >
> > ############################# Zookeeper #############################
> >
> > # Zookeeper connection string (see zookeeper docs for details).
> > # This is a comma separated host:port pairs, each corresponding to a zk
> > # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
> > # You can also append an optional chroot string to the urls to specify the
> > # root directory for all kafka znodes.
> > zookeeper.connect=10.131.208.195:1888,10.105.46.165:1888,10.105.52.174:1888
> >
> > # Timeout in ms for connecting to zookeeper
> > zookeeper.connection.timeout.ms=6000
> >
> >
> >
> > And the start script:
> > [gdata@gdataqosconnd2 kafka_2.11-0.9.0.0]$ cat bin/kafka-server-start.sh
> > #!/bin/bash
> > # Licensed to the Apache Software Foundation (ASF) under one or more
> > # contributor license agreements.  See the NOTICE file distributed with
> > # this work for additional information regarding copyright ownership.
> > # The ASF licenses this file to You under the Apache License, Version 2.0
> > # (the "License"); you may not use this file except in compliance with
> > # the License.  You may obtain a copy of the License at
> > #
> > #    http://www.apache.org/licenses/LICENSE-2.0
> > #
> > # Unless required by applicable law or agreed to in writing, software
> > # distributed under the License is distributed on an "AS IS" BASIS,
> > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> > # See the License for the specific language governing permissions and
> > # limitations under the License.
> >
> > if [ $# -lt 1 ];
> > then
> >     echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
> >     exit 1
> > fi
> > base_dir=$(dirname $0)
> >
> > if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
> >     export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
> > fi
> >
> > if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
> >     export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
> > fi
> >
> > EXTRA_ARGS="-name kafkaServer -loggc
> > -Dcom.sun.management.jmxremote.port=1105
> > -Dcom.sun.management.jmxremote=true
> > -Dcom.sun.management.jmxremote.authenticate=false
> > -Dcom.sun.management.jmxremote.ssl=false"
> >
> > COMMAND=$1
> > case $COMMAND in
> >   -daemon)
> >     EXTRA_ARGS="-daemon "$EXTRA_ARGS
> >     shift
> >     ;;
> >   *)
> >     ;;
> > esac
> >
> > exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka $@
> >
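
To confirm where the two changing ports come from before touching anything, the
listening sockets of the broker JVM can be listed directly. A sketch, using standard
lsof/ss options and the PID taken from the netstat output above:

    # sketch: list only the listening TCP sockets owned by the Kafka JVM,
    # with numeric addresses and ports (PID 31011 is from the netstat output)
    lsof -a -nP -iTCP -sTCP:LISTEN -p 31011
    # or, on a reasonably modern Linux:
    ss -plnt | grep 31011

With the RMI export port pinned as sketched near the top of the thread, the same check
after a restart should show 9092 on the private address plus the fixed JMX/RMI port,
rather than fresh random ports each time.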
> ***************************************************************************
> The information contained in this communication is confidential, is
> intended only for the use of the recipient named above, and may be legally
> privileged.
>
> If the reader of this message is not the intended recipient, you are
> hereby notified that any dissemination, distribution or copying of this
> communication is strictly prohibited.
>
> If you have received this communication in error, please resend this
> communication to the sender and delete the original message or any copy
> of it from your computer system.
>
> Thank You.
>
> ****************************************************************************
>
