kafka-users mailing list archives

From MAHA ALSAYASNEH <maha.alsayas...@univ-grenoble-alpes.fr>
Subject Re: Question about Kafka
Date Tue, 19 Sep 2017 16:18:25 GMT

Well, I kept the default: 
log.retention.hours=168 
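
For reference, the same seven-day window expressed in the finer-grained retention keys, as a sketch; if more than one of these is set, the smallest-unit key takes precedence: 

#log.retention.minutes=10080 
#log.retention.ms=604800000 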


Here are my broker configurations: 

############################# Server Basics ############################# 

# The id of the broker. This must be set to a unique integer for each broker. 
broker.id=3 
host.name=xxxx 

port=9092 
zookeeper.connect=xxx:2181,xxxx:2181,xxxx:2181 

# The maximum size of a message that the server can receive 
message.max.bytes=2000024 


replica.fetch.max.bytes=2000024 
request.timeout.ms=300000 
log.flush.interval.ms=10000 
log.flush.interval.messages=20000 

#replica.socket.timeout.ms=60000 
#linger.ms=30000 

# Switch to enable topic deletion or not, default value is false 
delete.topic.enable=true 

############################# Socket Server Settings ############################# 

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured. 
# FORMAT: 
# listeners = security_protocol://host_name:port 
# EXAMPLE: 
# listeners = PLAINTEXT://your.host.name:9092 
listeners=PLAINTEXT://x.x.x.X:9092 


# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured. Otherwise, it will use the value 
# returned from java.net.InetAddress.getCanonicalHostName(). 
#advertised.listeners=PLAINTEXT://your.host.name:9092 

# The number of threads handling network requests 
num.network.threads=4 

# The number of threads doing disk I/O 
num.io.threads=8 

# The send buffer (SO_SNDBUF) used by the socket server 
socket.send.buffer.bytes=102400 

# The receive buffer (SO_RCVBUF) used by the socket server 
socket.receive.buffer.bytes=102400 

# The maximum size of a request that the socket server will accept (protection against OOM) 
socket.request.max.bytes=104857600 


############################# Log Basics ############################# 

# A comma separated list of directories under which to store log files 
log.dirs=/tmp/kafka-logs 


# The default number of log partitions per topic. More partitions allow greater 
# parallelism for consumption, but this will also result in more files across 
# the brokers. 
num.partitions=8 

# The number of threads per data directory to be used for log recovery at startup 
# and flushing at shutdown. 
# This value is recommended to be increased for installations with data dirs 
# located in a RAID array. 
num.recovery.threads.per.data.dir=1 

############################# Log Flush Policy ############################# 

# Messages are immediately written to the filesystem but by default we only fsync() to sync 
# the OS cache lazily. The following configurations control the flush of data to disk. 
# There are a few important trade-offs here: 
# 1. Durability: Unflushed data may be lost if you are not using replication. 
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur, 
#    as there will be a lot of data to flush. 
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval 
#    may lead to excessive seeks. 
# The settings below allow one to configure the flush policy to flush data after a period of time or 
# every N messages (or both). This can be done globally and overridden on a per-topic basis. 


# The number of messages to accept before forcing a flush of data to disk 
#log.flush.interval.messages=10000 

# The maximum amount of time a message can sit in a log before we force a flush 
#log.flush.interval.ms=1000 

############################# Log Retention Policy ############################# 

# The following configurations control the disposal of log segments. The policy can 
# be set to delete segments after a period of time, or after a given size has accumulated. 
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens 
# from the end of the log. 

#log.retention.ms=600000 

# The minimum age of a log file to be eligible for deletion 
log.retention.hours=168 

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining 
# segments don't drop below log.retention.bytes. 
#log.retention.bytes=1073741824 

# The maximum size of a log segment file. When this size is reached a new log segment will 
# be created. 
log.segment.bytes=536870912 
# log.segment.bytes=2147483648 

# The interval at which log segments are checked to see if they can be deleted according 
# to the retention policies 
#log.retention.check.interval.ms=60000 

############################# Zookeeper ############################# 

# Zookeeper connection string (see zookeeper docs for details). 
# This is a comma separated list of host:port pairs, each corresponding to a zk 
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002". 
# You can also append an optional chroot string to the urls to specify the 
# root directory for all kafka znodes. 
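# As a sketch of that chroot form (zk1..zk3 and /kafka are placeholder values): 
#zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka 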


# Timeout in ms for connecting to zookeeper 
zookeeper.connection.timeout.ms=1000000 


# metrics reporter properties 
kafka.metrics.polling.interval.secs=5 
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter 
kafka.csv.metrics.dir=/tmp/kafka_metrics 
# CSV reporting is disabled by default; it is enabled here. 
kafka.csv.metrics.reporter.enabled=true 



Thanks, 
maha 



From: "Bhavi C" <bhavi14@outlook.com> 
To: "users" <users@kafka.apache.org> 
Sent: Tuesday, September 19, 2017 6:11:05 PM 
Subject: Re: Question about Kafka 

What is the retention time on the topic you are publishing to? 

________________________________ 
From: MAHA ALSAYASNEH <maha.alsayasneh@univ-grenoble-alpes.fr> 
Sent: Tuesday, September 19, 2017 10:25:15 AM 
To: users@kafka.apache.org 
Subject: Question about Kafka 

Hello, 

I'm using Kafka 0.10.1.1 

I set up my Kafka + ZooKeeper cluster on three nodes (three brokers, one topic, 6 partitions, 
3 replicas). 
When I send messages with the Kafka producer (running on an independent node), I sometimes get 
the following error and can't figure out how to solve it: 

" org.apache.kafka.common.errors.TimeoutException: Expiring 61 record(s) for XXXX due to 30001
ms has passed since batch creation plus linger time " 
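
For context, in the 0.10.x producer this expiry is governed by request.timeout.ms (default 30000 ms): 
batches waiting in the record accumulator are expired once that timeout plus the linger time has 
elapsed, which matches the 30001 ms above. A minimal producer-side sketch of the settings involved; 
the values below are illustrative assumptions, not recommendations: 

# producer.properties (sketch; values are illustrative assumptions) 
bootstrap.servers=xxxx:9092 
# How long the producer waits before expiring an unsent batch or in-flight request 
request.timeout.ms=60000 
# How long a batch may wait to accumulate more records before sending (0 = send at once) 
linger.ms=0 
# Retry transient send failures instead of failing immediately 
retries=3 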


Could you please help? 

Thanks in advance 
Maha 
