kafka-users mailing list archives

From Nazario Parsacala <dodongj...@gmail.com>
Subject Re: Kafka SSL Configuration Problems
Date Mon, 01 Feb 2016 18:30:40 GMT
Ok, this is getting interesting. On the broker side, it says it is registering 9092 as PLAINTEXT and 9093 as SSL:

[2016-02-01 13:26:33,796] INFO Registered broker 0 at path /brokers/ids/0 with addresses:
PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT),SSL -> EndPoint(servername,9093,SSL)
(kafka.utils.ZkUtils)

But if you check the ports the broker actually has open, you only see port 9092:

lsof -p 7675 | grep LIST
java    7675 bushido   67u  IPv6             110567      0t0      TCP *:45688 (LISTEN)
java    7675 bushido   96u  IPv6             113359      0t0      TCP servername:9092 (LISTEN)


Why ..?
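
For what it's worth, the registered endpoints can also be read straight out of ZooKeeper, and the SSL port can be probed from the shell (assuming ZooKeeper is on localhost:2181 and the broker host is servername):

bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/0
openssl s_client -connect servername:9093 </dev/null

The first should show the same PLAINTEXT and SSL endpoints as the log line above; the second should print a certificate chain if something is really serving TLS on 9093.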


> On Feb 1, 2016, at 1:16 PM, Nazario Parsacala <dodongjuan@gmail.com> wrote:
> 
> No juice.
> 
> /kafka-topics.sh --describe --topic anotherone  --zookeeper localhost:2181
> Topic:anotherone	PartitionCount:4	ReplicationFactor:1	Configs:
> 	Topic: anotherone	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
> 	Topic: anotherone	Partition: 1	Leader: 0	Replicas: 0	Isr: 0
> 	Topic: anotherone	Partition: 2	Leader: 0	Replicas: 0	Isr: 0
> 	Topic: anotherone	Partition: 3	Leader: 0	Replicas: 0	Isr: 0
> 
> Same error.
> 
> bin/kafka-console-producer.sh --broker-list servername:9093 --topic anotherone --producer.config config/client-ssl.properties
> [2016-02-01 13:09:45,205] ERROR Error when sending message to topic anotherone with key:
> null, value: 0 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> [2016-02-01 13:10:45,206] ERROR Error when sending message to topic anotherone with key:
> null, value: 0 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> 
> 
> I have read somewhere that you need to configure meta.broker.list. Is this true? Anyway, I tried setting that too with no luck.
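> 
> (For reference, config/client-ssl.properties here needs a truststore and, since the broker has ssl.client.auth=required, a client keystore as well. Roughly along these lines, with placeholder paths and passwords:
> 
> security.protocol=SSL
> ssl.truststore.location=/pathto/certs/client.truststore.jks
> ssl.truststore.password=123456
> ssl.keystore.location=/pathto/certs/client.keystore.jks
> ssl.keystore.password=123456
> ssl.key.password=123456
> )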
> 
> 
> 
>> On Feb 1, 2016, at 1:02 PM, Anirudh P <panirudh2001@gmail.com> wrote:
>> 
>> Hello Nazario,
>> 
>> Could you try it by creating a new topic?
>> 
>> Thank you,
>> Anirudh
>> That works. At least it now says it is registering the SSL side as well.
>> 
>> 
>> [2016-02-01 12:29:40,184] INFO Registered broker 0 at path /brokers/ids/0
>> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT),SSL ->
>> EndPoint(servername,9093,SSL) (kafka.utils.ZkUtils)
>> 
>> 
>> Thank you.
>> 
>> Now to the next problem. :-) Still related to SSL.
>> 
>> 
>> The producer is not giving any more LEADER_NOT_AVAILABLE errors, but is now
>> having this problem instead.
>> 
>> [2016-02-01 12:41:59,273] ERROR Error when sending message to topic test
>> with key: null, value: 5 bytes with error: Failed to update metadata after
>> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> [2016-02-01 12:42:59,274] ERROR Error when sending message to topic test
>> with key: null, value: 7 bytes with error: Failed to update metadata after
>> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> [2016-02-01 12:43:59,275] ERROR Error when sending message to topic test
>> with key: null, value: 0 bytes with error: Failed to update metadata after
>> 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>> 
>> 
>> The consumer is connecting too, but not receiving any data.
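>> 
>> (In 0.9 only the new consumer supports SSL, so the consumer command is something along the lines of:
>> 
>> bin/kafka-console-consumer.sh --new-consumer --bootstrap-server servername:9093 --topic test --consumer.config config/client-ssl.properties
>> )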
>> 
>> 
>> 
>> 
>>> On Feb 1, 2016, at 12:15 PM, Ismael Juma <ismael@juma.me.uk> wrote:
>>> 
>>> Please use advertised.listeners instead of advertised.host.name. See this
>>> comment:
>>> 
>>> https://github.com/apache/kafka/pull/793#issuecomment-174287124
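>>> 
>>> In server.properties that would be something along the lines of:
>>> 
>>> advertised.listeners=PLAINTEXT://servername:9092,SSL://servername:9093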
>>> 
>>> Ismael
>>> 
>>> On Mon, Feb 1, 2016 at 4:44 PM, Nazario Parsacala <dodongjuan@gmail.com>
>>> wrote:
>>> 
>>>> Hi,
>>>> 
>>>> We have been using Kafka for a while now, with the binary release
>>>> 2.10-0.8.2.1. But we need encrypted communication between
>>>> our publishers and subscribers, so we got 2.10-0.9.0.0. This works very
>>>> well with no SSL enabled, but we currently have issues with SSL enabled.
>>>> 
>>>> So I configured SSL according to
>>>> http://kafka.apache.org/documentation.html#security and only placed the
>>>> following changes in server.properties to enable SSL:
>>>> 
>>>> listeners=PLAINTEXT://servername:9092,SSL://servername:9093
>>>> 
>>>> # The port the socket server listens on
>>>> #port=9092
>>>> 
>>>> # Hostname the broker will bind to. If not set, the server will bind to
>>>> all interfaces
>>>> host.name=servername
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> # SSL Stuff
>>>> #
>>>> ssl.client.auth=required
>>>> ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
>>>> ssl.keystore.location=/pathto/certs/server.keystore.jks
>>>> ssl.keystore.password=123456
>>>> ssl.key.password=123456
>>>> ssl.truststore.location=/pathto/certs/server.truststore.jks
>>>> ssl.truststore.password=123456
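>>>> 
>>>> (As a sanity check, the keystore and truststore can be listed with keytool, e.g.:
>>>> 
>>>> keytool -list -keystore /pathto/certs/server.keystore.jks -storepass 123456
>>>> keytool -list -keystore /pathto/certs/server.truststore.jks -storepass 123456
>>>> )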
>>>> 
>>>> 
>>>> At startup I see the following in the logs:
>>>> 
>>>> 
>>>>       advertised.host.name = servername
>>>>       metric.reporters = []
>>>>       quota.producer.default = 9223372036854775807
>>>>       offsets.topic.num.partitions = 50
>>>>       log.flush.interval.messages = 9223372036854775807
>>>>       auto.create.topics.enable = true
>>>>       controller.socket.timeout.ms = 30000
>>>>       log.flush.interval.ms = null
>>>>       principal.builder.class = class
>>>> org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
>>>>       replica.socket.receive.buffer.bytes = 65536
>>>>       min.insync.replicas = 1
>>>>       replica.fetch.wait.max.ms = 500
>>>>       num.recovery.threads.per.data.dir = 1
>>>>       ssl.keystore.type = JKS
>>>>       default.replication.factor = 1
>>>>       ssl.truststore.password = [hidden]
>>>>       log.preallocate = false
>>>>       sasl.kerberos.principal.to.local.rules = [DEFAULT]
>>>>       fetch.purgatory.purge.interval.requests = 1000
>>>>       ssl.endpoint.identification.algorithm = null
>>>>       replica.socket.timeout.ms = 30000
>>>>       message.max.bytes = 1000012
>>>>       num.io.threads = 8
>>>>       offsets.commit.required.acks = -1
>>>>       log.flush.offset.checkpoint.interval.ms = 60000
>>>>       delete.topic.enable = false
>>>>       quota.window.size.seconds = 1
>>>>       ssl.truststore.type = JKS
>>>>       offsets.commit.timeout.ms = 5000
>>>>       quota.window.num = 11
>>>>       zookeeper.connect = servername:2181
>>>>       authorizer.class.name =
>>>>       num.replica.fetchers = 1
>>>>       log.retention.ms = null
>>>>       log.roll.jitter.hours = 0
>>>>       log.cleaner.enable = false
>>>>       offsets.load.buffer.size = 5242880
>>>>       log.cleaner.delete.retention.ms = 86400000
>>>>       ssl.client.auth = required
>>>>       controlled.shutdown.max.retries = 3
>>>>       queued.max.requests = 500
>>>>       offsets.topic.replication.factor = 3
>>>>       log.cleaner.threads = 1
>>>>       sasl.kerberos.service.name = null
>>>>       sasl.kerberos.ticket.renew.jitter = 0.05
>>>>       socket.request.max.bytes = 104857600
>>>>       ssl.trustmanager.algorithm = PKIX
>>>>       zookeeper.session.timeout.ms = 6000
>>>>       log.retention.bytes = -1
>>>>       sasl.kerberos.min.time.before.relogin = 60000
>>>>       zookeeper.set.acl = false
>>>>       connections.max.idle.ms = 600000
>>>>       offsets.retention.minutes = 1440
>>>>       replica.fetch.backoff.ms = 1000
>>>>       inter.broker.protocol.version = 0.9.0.X
>>>>       log.retention.hours = 168
>>>>       num.partitions = 4
>>>>       listeners = PLAINTEXT://servername:9092,SSL://servername:9093
>>>>       ssl.provider = null
>>>>       ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>>>>       log.roll.ms = null
>>>>       log.flush.scheduler.interval.ms = 9223372036854775807
>>>>       ssl.cipher.suites = null
>>>>       log.index.size.max.bytes = 10485760
>>>>       ssl.keymanager.algorithm = SunX509
>>>>       security.inter.broker.protocol = PLAINTEXT
>>>>       replica.fetch.max.bytes = 1048576
>>>>       advertised.port = null
>>>>       log.cleaner.dedupe.buffer.size = 524288000
>>>>       replica.high.watermark.checkpoint.interval.ms = 5000
>>>>       log.cleaner.io.buffer.size = 524288
>>>>       sasl.kerberos.ticket.renew.window.factor = 0.8
>>>>       zookeeper.connection.timeout.ms = 6000
>>>>       controlled.shutdown.retry.backoff.ms = 5000
>>>>       log.roll.hours = 168
>>>>       log.cleanup.policy = delete
>>>>       host.name = servername
>>>>       log.roll.jitter.ms = null
>>>>       max.connections.per.ip = 2147483647
>>>>       offsets.topic.segment.bytes = 104857600
>>>>       background.threads = 10
>>>>       quota.consumer.default = 9223372036854775807
>>>>       request.timeout.ms = 30000
>>>>       log.index.interval.bytes = 4096
>>>>       log.dir = /tmp/kafka-logs
>>>>       log.segment.bytes = 1073741824
>>>>       log.cleaner.backoff.ms = 15000
>>>>       offset.metadata.max.bytes = 4096
>>>>       ssl.truststore.location = /pathto/certs/server.truststore.jks
>>>>       group.max.session.timeout.ms = 30000
>>>>       ssl.keystore.password = [hidden]
>>>>       zookeeper.sync.time.ms = 2000
>>>>       port = 9092
>>>>       log.retention.minutes = null
>>>>       log.segment.delete.delay.ms = 60000
>>>>       log.dirs = /pathto/logs/kafka
>>>>       controlled.shutdown.enable = true
>>>>       compression.type = producer
>>>>       max.connections.per.ip.overrides =
>>>>       sasl.kerberos.kinit.cmd = /usr/bin/kinit
>>>>       log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>>>>       auto.leader.rebalance.enable = true
>>>>       leader.imbalance.check.interval.seconds = 300
>>>>       log.cleaner.min.cleanable.ratio = 0.5
>>>>       replica.lag.time.max.ms = 10000
>>>>       num.network.threads = 3
>>>>       ssl.key.password = [hidden]
>>>>       reserved.broker.max.id = 1000
>>>>       metrics.num.samples = 2
>>>>       socket.send.buffer.bytes = 102400
>>>>       ssl.protocol = TLS
>>>>       socket.receive.buffer.bytes = 102400
>>>>       ssl.keystore.location = /pathto/certs/server.keystore.jks
>>>>       replica.fetch.min.bytes = 1
>>>>       unclean.leader.election.enable = true
>>>>       group.min.session.timeout.ms = 6000
>>>>       log.cleaner.io.buffer.load.factor = 0.9
>>>>       offsets.retention.check.interval.ms = 600000
>>>>       producer.purgatory.purge.interval.requests = 1000
>>>> 
>>>> 
>>>> 
>>>> So as you can see, the listeners are supposedly set up as
>>>> 
>>>>       listeners = PLAINTEXT://servername:9092,SSL://servername:9093
>>>> 
>>>> in the logs, which reflects what was set up in server.properties.
>>>> 
>>>> However, further down in the logs, only PLAINTEXT is being
>>>> registered:
>>>> 
>>>> [2016-02-01 11:27:49,712] INFO Registered broker 0 at path /brokers/ids/0
>>>> with addresses: PLAINTEXT -> EndPoint(servername,9092,PLAINTEXT)
>>>> (kafka.utils.ZkUtils)
>>>> 
>>>> 
>>>> Neither port 9093 nor SSL shows up.
>>>> 
>>>> I have tried multiple permutations of this config, including clearing the
>>>> entire Kafka and ZooKeeper data. Still no luck. I even forced SSL
>>>> on port 9092, with the same issue. The resulting effect is that the
>>>> producer and consumer are giving me errors like:
>>>> 
>>>> [2016-02-01 10:58:41,001] WARN Error while fetching metadata with
>>>> correlation id 57 : {test=LEADER_NOT_AVAILABLE}
>>>> (org.apache.kafka.clients.NetworkClient)
>>>> [2016-02-01 10:58:41,103] WARN Error while fetching metadata with
>>>> correlation id 58 : {test=LEADER_NOT_AVAILABLE}
>>>> (org.apache.kafka.clients.NetworkClient)
>>>> [2016-02-01 10:58:41,205] WARN Error while fetching metadata with
>>>> correlation id 59 : {test=LEADER_NOT_AVAILABLE}
>>>> (org.apache.kafka.clients.NetworkClient)
>>>> 
>>>> 
>>>> Any help is appreciated.
>>>> 
>>>> 
> 

