kafka-users mailing list archives

From Nik Hodgkinson <11x...@gmail.com>
Subject Kafka hangs on startup
Date Fri, 21 Jun 2019 08:52:01 GMT
I'm experiencing an issue I'm not sure how to start tracking down. My Kafka
brokers often hang partway through startup. The log contains nothing that
points to a problem, so I'm not sure where to go from here. I've pasted a
startup log below; please advise on how to proceed.
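If a thread dump would help, here's roughly how I'd plan to capture one the
next time a broker wedges (the pod name is from my environment, and I'm
assuming the image ships pgrep and either jstack or a JVM that honors
SIGQUIT):

```shell
# Capture a thread dump from the hung broker pod.
# Pod name below is from my environment; adjust for yours.
POD=kafka-local-stateful-set-2

# Find the broker's JVM pid inside the container.
PID=$(kubectl exec "$POD" -- pgrep -f kafka.Kafka)

# Preferred: jstack, if the image ships a full JDK.
kubectl exec "$POD" -- jstack "$PID" > "threads-$POD.txt"

# Fallback: SIGQUIT makes the JVM print a thread dump to stdout,
# which then shows up in the container logs.
kubectl exec "$POD" -- kill -3 "$PID"
kubectl logs "$POD" --tail=200
```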

Thanks,
-Nik
11xor6@gmail.com
(913) 927-4891

--- LOG BEGINS ---
[2019-06-21 08:36:48,447] INFO Registered kafka:type=kafka.Log4jController
MBean (kafka.utils.Log4jControllerRegistration$)
[2019-06-21 08:36:48,997] INFO starting (kafka.server.KafkaServer)
[2019-06-21 08:36:49,002] INFO Connecting to zookeeper on
zookeeper-local-headless:2181/kafka-local (kafka.server.KafkaServer)
[2019-06-21 08:36:49,021] INFO [ZooKeeperClient] Initializing a new session
to zookeeper-local-headless:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,025] INFO Client
environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03,
built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,025] INFO Client
environment:host.name=kafka-local-stateful-set-2.kafka-local-headless.default.svc.cluster.local
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,025] INFO Client environment:java.version=1.8.0_212
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,025] INFO Client environment:java.vendor=Oracle
Corporation (org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,025] INFO Client
environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,026] INFO Client
environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.1.1.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.1.1.jar:/opt/kafka/bin/../libs/connect-file-2.1.1.jar:/opt/kafka/bin/../libs/connect-json-2.1.1.jar:/opt/kafka/bin/../libs/connect-runtime-2.1.1.jar:/opt/kafka/bin/../libs/connect-transforms-2.1.1.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b42.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b42.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.8.jar:/opt/kafka/bin/../libs/jackson-core-2.9.8.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.8.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.8.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.8.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.8.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b42.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.27.jar:/opt/kafka/bin/../libs/jersey-common-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.27.jar:/opt/kafka/bin/../libs/jersey-hk2-2.27.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.27.jar:/opt/kafka/bin/../libs/jersey-server-2.27.jar:/opt/kafka/bin/../libs/jetty-client-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-http-9.4.12.v20180830.jar:/opt/kafka/
bin/../libs/jetty-io-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-security-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-server-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jetty-util-9.4.12.v20180830.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.1.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.1.1.jar:/opt/kafka/bin/../libs/kafka-streams-2.1.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.1.1.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.12-2.1.1.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.1.1.jar:/opt/kafka/bin/../libs/kafka-tools-2.1.1.jar:/opt/kafka/bin/../libs/kafka_2.12-2.1.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-2.1.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.5.0.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.1.0.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.14.2.jar:/opt/kafka/bin/../libs/scala-library-2.12.7.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.9.0.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.7.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.2.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.11.jar:/opt/kafka/bin/../libs/zookeeper-3.4.13.jar:/opt/kafka/bin/../libs/zstd-jni-1.3.7-1.jar:/opt/prometheus/jmx_prometheus_javaagent.jar
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,026] INFO Client
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:java.io.tmpdir=/tmp
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:java.compiler=<NA>
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:os.name=Linux
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:os.arch=amd64
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:os.version=4.15.0
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:user.name=kafka
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:user.home=/home/kafka
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,027] INFO Client environment:user.dir=/
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,028] INFO Initiating client connection,
connectString=zookeeper-local-headless:2181 sessionTimeout=6000
watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@561b6512
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,038] INFO [ZooKeeperClient] Waiting until connected.
(kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,067] INFO Client successfully logged in.
(org.apache.zookeeper.Login)
[2019-06-21 08:36:49,069] INFO Client will use DIGEST-MD5 as SASL
mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-06-21 08:36:49,071] INFO Opening socket connection to server
zookeeper-local-headless/172.17.0.3:2181. Will attempt to SASL-authenticate
using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-06-21 08:36:49,073] INFO Socket connection established to
zookeeper-local-headless/172.17.0.3:2181, initiating session
(org.apache.zookeeper.ClientCnxn)
[2019-06-21 08:36:49,083] INFO Session establishment complete on server
zookeeper-local-headless/172.17.0.3:2181, sessionid = 0x200001766530005,
negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-06-21 08:36:49,087] INFO [ZooKeeperClient] Connected.
(kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,140] INFO Created zookeeper path /kafka-local
(kafka.server.KafkaServer)
[2019-06-21 08:36:49,141] INFO [ZooKeeperClient] Closing.
(kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,145] INFO Session: 0x200001766530005 closed
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,145] INFO EventThread shut down for session:
0x200001766530005 (org.apache.zookeeper.ClientCnxn)
[2019-06-21 08:36:49,148] INFO [ZooKeeperClient] Closed.
(kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,149] INFO [ZooKeeperClient] Initializing a new session
to zookeeper-local-headless:2181/kafka-local.
(kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,149] INFO Initiating client connection,
connectString=zookeeper-local-headless:2181/kafka-local sessionTimeout=6000
watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1b6e1eff
(org.apache.zookeeper.ZooKeeper)
[2019-06-21 08:36:49,152] INFO Client will use DIGEST-MD5 as SASL
mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-06-21 08:36:49,153] INFO [ZooKeeperClient] Waiting until connected.
(kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,153] INFO Opening socket connection to server
zookeeper-local-headless/172.17.0.4:2181. Will attempt to SASL-authenticate
using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-06-21 08:36:49,154] INFO Socket connection established to
zookeeper-local-headless/172.17.0.4:2181, initiating session
(org.apache.zookeeper.ClientCnxn)
[2019-06-21 08:36:49,159] INFO Session establishment complete on server
zookeeper-local-headless/172.17.0.4:2181, sessionid = 0x3000017665d0001,
negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-06-21 08:36:49,159] INFO [ZooKeeperClient] Connected.
(kafka.zookeeper.ZooKeeperClient)
[2019-06-21 08:36:49,378] INFO Cluster ID = xs6Jfv_mRDaIR_9dZN08Uw
(kafka.server.KafkaServer)
[2019-06-21 08:36:49,382] WARN No meta.properties file under dir
/var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-06-21 08:36:49,447] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners =
INTERNAL_SASL_SSL://kafka-local-stateful-set-2.kafka-local-headless.default.svc.cluster.local:9092,LOCAL_PLAINTEXT://:9113,SECURE_SASL_SSL://:9114
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer
auto.create.topics.enable = false
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 2
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = INTERNAL_SASL_SSL
inter.broker.protocol.version = 2.1-IV2
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map =
INTERNAL_SASL_SSL:SASL_SSL,LOCAL_PLAINTEXT:PLAINTEXT,SECURE_SASL_SSL:SASL_SSL
listeners = INTERNAL_SASL_SSL://0.0.0.0:9092,LOCAL_PLAINTEXT://0.0.0.0:9093
,SECURE_SASL_SSL://0.0.0.0:9094
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/data
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.1-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [PLAIN]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = PLAIN
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm =
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /opt/kafka/ssl/keystore.2.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = SHA1PRNG
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /opt/kafka/ssl/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper-local-headless:2181/kafka-local
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = true
zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-06-21 08:36:49,461] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners =
INTERNAL_SASL_SSL://kafka-local-stateful-set-2.kafka-local-headless.default.svc.cluster.local:9092,LOCAL_PLAINTEXT://:9113,SECURE_SASL_SSL://:9114
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer
auto.create.topics.enable = false
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 2
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = INTERNAL_SASL_SSL
inter.broker.protocol.version = 2.1-IV2
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map =
INTERNAL_SASL_SSL:SASL_SSL,LOCAL_PLAINTEXT:PLAINTEXT,SECURE_SASL_SSL:SASL_SSL
listeners = INTERNAL_SASL_SSL://0.0.0.0:9092,LOCAL_PLAINTEXT://0.0.0.0:9093
,SECURE_SASL_SSL://0.0.0.0:9094
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/data
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.1-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [PLAIN]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = PLAIN
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm =
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /opt/kafka/ssl/keystore.2.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = SHA1PRNG
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /opt/kafka/ssl/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper-local-headless:2181/kafka-local
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = true
zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-06-21 08:36:49,496] INFO [ThrottledChannelReaper-Fetch]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-06-21 08:36:49,497] INFO [ThrottledChannelReaper-Produce]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-06-21 08:36:49,499] INFO [ThrottledChannelReaper-Request]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-06-21 08:36:49,533] INFO Loading logs. (kafka.log.LogManager)
[2019-06-21 08:36:49,542] INFO Logs loading complete in 9 ms.
(kafka.log.LogManager)
[2019-06-21 08:36:49,559] INFO Starting log cleanup with a period of 300000
ms. (kafka.log.LogManager)
[2019-06-21 08:36:49,565] INFO Starting log flusher with a default period
of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-06-21 08:36:50,000] INFO Awaiting socket connections on 0.0.0.0:9092.
(kafka.network.Acceptor)
[2019-06-21 08:36:50,014] INFO Successfully logged in.
(org.apache.kafka.common.security.authenticator.AbstractLogin)
--- LOG ENDS ---
