kafka-users mailing list archives

From Ritesh Sinha <kumarriteshranjansi...@gmail.com>
Subject Re: Error while sending data to kafka producer
Date Thu, 10 Dec 2015 05:28:18 GMT
After editing my server.properties, I didn't restart Kafka. That was
causing the issue. Silly mistake. Thanks a lot, Ben, for your replies.
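
For anyone who finds this thread later: the broker only reads
server.properties at startup, so after editing it the broker must be
restarted, e.g. with the scripts shipped in the Kafka distribution:

    bin/kafka-server-stop.sh
    bin/kafka-server-start.sh config/server.properties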

On Thu, Dec 10, 2015 at 2:27 AM, Ben Stopford <ben@confluent.io> wrote:

> Hi Ritesh
>
> Your config on both sides looks fine. There may be something wrong with
> your truststore, although you should see exceptions in either the client or
> server log files if that is the case.
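>
> A quick way to test the broker's SSL listener independently of the client
> (taken from the Kafka security docs) is:
>
>     openssl s_client -debug -connect localhost:9093 -tls1
>
> If the keystore is set up correctly you should see the server's
> certificate in the output.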
>
> As you appear to be running locally, try creating the JKS files using the
> shell script included here (
> http://docs.confluent.io/2.0.0/kafka/ssl.html#signing-the-certificate ),
> entering the password test1234 whenever prompted, then use these JKS files
> in your client and broker config. If executed correctly, this example
> should definitely work.
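>
> For reference, that script runs roughly the standard keytool/openssl
> sequence from the Kafka security docs (a sketch; file names and flags may
> differ slightly in the linked version):
>
>     # generate the broker key pair in a keystore
>     keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey
>     # create your own CA
>     openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
>     # trust the CA in the client and server truststores
>     keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert
>     keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
>     # sign the broker certificate with the CA
>     keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
>     openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial
>     # import the CA cert and the signed cert back into the broker keystore
>     keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
>     keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
>
> Use test1234 at every password prompt so the passwords match the config
> below.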
>
> Ben
>
>
> > On 9 Dec 2015, at 18:50, Ritesh Sinha <kumarriteshranjansinha@gmail.com>
> wrote:
> >
> > This is my server config.
> >
> > On Thu, Dec 10, 2015 at 12:19 AM, Ritesh Sinha <
> > kumarriteshranjansinha@gmail.com> wrote:
> >
> >> # Licensed to the Apache Software Foundation (ASF) under one or more
> >> # contributor license agreements.  See the NOTICE file distributed with
> >> # this work for additional information regarding copyright ownership.
> >> # The ASF licenses this file to You under the Apache License, Version 2.0
> >> # (the "License"); you may not use this file except in compliance with
> >> # the License.  You may obtain a copy of the License at
> >> #
> >> #    http://www.apache.org/licenses/LICENSE-2.0
> >> #
> >> # Unless required by applicable law or agreed to in writing, software
> >> # distributed under the License is distributed on an "AS IS" BASIS,
> >> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> >> # See the License for the specific language governing permissions and
> >> # limitations under the License.
> >> # see kafka.server.KafkaConfig for additional details and defaults
> >>
> >> ############################# Server Basics #############################
> >>
> >> # The id of the broker. This must be set to a unique integer for each broker.
> >> broker.id=0
> >>
> >> ############################# Socket Server Settings #############################
> >>
> >> listeners=PLAINTEXT://:9092,SSL://localhost:9093
> >>
> >> # The port the socket server listens on
> >> #port=9092
> >>
> >> # Hostname the broker will bind to. If not set, the server will bind to all interfaces
> >> #host.name=localhost
> >>
> >> # Hostname the broker will advertise to producers and consumers. If not set, it uses the
> >> # value for "host.name" if configured.  Otherwise, it will use the value returned from
> >> # java.net.InetAddress.getCanonicalHostName().
> >> #advertised.host.name=<hostname routable by clients>
> >>
> >> # The port to publish to ZooKeeper for clients to use. If this is not set,
> >> # it will publish the same port that the broker binds to.
> >> #advertised.port=<port accessible by clients>
> >>
> >> # The number of threads handling network requests
> >> num.network.threads=3
> >>
> >> # The number of threads doing disk I/O
> >> num.io.threads=8
> >>
> >> # The send buffer (SO_SNDBUF) used by the socket server
> >> socket.send.buffer.bytes=102400
> >>
> >> # The receive buffer (SO_RCVBUF) used by the socket server
> >> socket.receive.buffer.bytes=102400
> >>
> >> # The maximum size of a request that the socket server will accept (protection against OOM)
> >> socket.request.max.bytes=104857600
> >>
> >>
> >> ############################# Log Basics #############################
> >>
> >> # A comma separated list of directories under which to store log files
> >> log.dirs=/tmp/kafka-logs
> >>
> >> # The default number of log partitions per topic. More partitions allow greater
> >> # parallelism for consumption, but this will also result in more files across
> >> # the brokers.
> >> num.partitions=1
> >>
> >> # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
> >> # This value is recommended to be increased for installations with data dirs located in RAID array.
> >> num.recovery.threads.per.data.dir=1
> >>
> >> ############################# Log Flush Policy #############################
> >>
> >> # Messages are immediately written to the filesystem but by default we only fsync() to sync
> >> # the OS cache lazily. The following configurations control the flush of data to disk.
> >> # There are a few important trade-offs here:
> >> #    1. Durability: Unflushed data may be lost if you are not using replication.
> >> #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
> >> #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
> >> # The settings below allow one to configure the flush policy to flush data after a period of time or
> >> # every N messages (or both). This can be done globally and overridden on a per-topic basis.
> >>
> >> # The number of messages to accept before forcing a flush of data to disk
> >> #log.flush.interval.messages=10000
> >>
> >> # The maximum amount of time a message can sit in a log before we force a flush
> >> #log.flush.interval.ms=1000
> >>
> >> ############################# Log Retention Policy #############################
> >>
> >> # The following configurations control the disposal of log segments. The policy can
> >> # be set to delete segments after a period of time, or after a given size has accumulated.
> >> # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
> >> # from the end of the log.
> >>
> >> # The minimum age of a log file to be eligible for deletion
> >> log.retention.hours=168
> >>
> >> # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
> >> # segments don't drop below log.retention.bytes.
> >> #log.retention.bytes=1073741824
> >>
> >> # The maximum size of a log segment file. When this size is reached a new log segment will be created.
> >> log.segment.bytes=1073741824
> >>
> >> # The interval at which log segments are checked to see if they can be deleted according
> >> # to the retention policies
> >> log.retention.check.interval.ms=300000
> >>
> >> # By default the log cleaner is disabled and the log retention policy will
> >> # default to just delete segments after their retention expires.
> >> # If log.cleaner.enable=true is set the cleaner will be enabled and
> >> individual logs can then be marked for log compaction.
> >> log.cleaner.enable=false
> >>
> >> ############################# Zookeeper #############################
> >>
> >> # Zookeeper connection string (see zookeeper docs for details).
> >> # This is a comma separated list of host:port pairs, each corresponding to a zk
> >> # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
> >> # You can also append an optional chroot string to the urls to specify the
> >> # root directory for all kafka znodes.
> >> zookeeper.connect=localhost:2181
> >>
> >> # Timeout in ms for connecting to zookeeper
> >> zookeeper.connection.timeout.ms=6000
> >>
> >>
> >>
> >> ssl.keystore.location=/home/ritesh/software/kafka_2.10-0.9.0.0/kafka.server.keystore.jks
> >> ssl.keystore.password=test1234
> >> ssl.key.password=test1234
> >> ssl.truststore.location=/home/ritesh/software/kafka_2.10-0.9.0.0/kafka.server.truststore.jks
> >> ssl.truststore.password=test1234
> >>
> >>
>
>
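
For completeness, the client side of this setup only needs the truststore;
a minimal producer config sketch (file names assumed from the example
above, saved e.g. as client-ssl.properties):

    security.protocol=SSL
    ssl.truststore.location=/home/ritesh/software/kafka_2.10-0.9.0.0/kafka.client.truststore.jks
    ssl.truststore.password=test1234

which can be passed to the console producer for a quick test:

    bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config client-ssl.properties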
