kafka-users mailing list archives

From Manu Zhang <owenzhang1...@gmail.com>
Subject Re: Broker Exception: Attempt to read with a maximum offset less than start offset
Date Thu, 21 Jan 2016 01:28:58 GMT
Hi,

Any suggestions for this issue, or do I need to provide more information?
Any links I could refer to would also be very helpful. I have appended rough
sketches of our producer callback and an offset check below the quoted message.

Thanks,
Manu Zhang


On Tue, Jan 19, 2016 at 8:41 PM, Manu Zhang <owenzhang1990@gmail.com> wrote:

> Hi all,
>
> Is KAFKA-725 "Broker Exception: Attempt to read with a maximum offset less
> than start offset" <https://issues.apache.org/jira/browse/KAFKA-725> still
> valid? We are seeing a similar issue while running Yahoo's
> streaming-benchmarks <https://github.com/yahoo/streaming-benchmarks> on a
> 4-node cluster. Our issue is tracked at
> https://github.com/gearpump/gearpump/issues/1872.
>
> We are using Kafka scala-2.10-0.8.2.1. 4 brokers are installed on 4 nodes,
> with ZooKeeper on 3 of them. On each node, 4 producers produce data to a
> Kafka topic with 4 partitions and 1 replica. Each producer has a throughput
> of 17K messages/s. 4 consumers are distributed (not necessarily evenly)
> across the cluster and consume from Kafka as fast as possible.
>
> I tried logging the produced offsets (with a callback in send) and found
> that the "start offset" already existed when the consumer failed with the
> fetch exception.
>
> This happens only when the producers are producing at high throughput.
>
> Any ideas would be much appreciated.
>
> Thanks,
> Manu Zhang
>
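
For reference, the offset logging mentioned above follows roughly the pattern
below, using the 0.8.2 Java producer. This is a minimal sketch, not the actual
benchmark code: the broker list, topic name, message contents and message count
are placeholders.

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OffsetLoggingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list; in our setup one broker runs on each of the 4 nodes.
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092,node4:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 1000000; i++) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("benchmark-topic", Integer.toString(i));
            // Log the partition and offset assigned to every message so they can
            // be compared with the "start offset" reported in the fetch exception.
            producer.send(record, new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("partition=" + metadata.partition()
                            + " offset=" + metadata.offset());
                    }
                }
            });
        }
        producer.close();
    }
}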

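To verify that the offset named in the exception really exists in the
partition's log, the earliest and latest offsets of a partition can be queried
with the 0.8 SimpleConsumer API, roughly as below (host, port, topic and
partition are placeholders). The same numbers can also be read with the
kafka.tools.GetOffsetShell tool.

import java.util.HashMap;
import java.util.Map;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class PartitionOffsetCheck {

    // Ask the broker for one offset at the given "time" (-2 = earliest, -1 = latest).
    static long fetchOffset(SimpleConsumer consumer, String topic, int partition, long time) {
        TopicAndPartition tp = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
        requestInfo.put(tp, new PartitionOffsetRequestInfo(time, 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "offset-check");
        OffsetResponse response = consumer.getOffsetsBefore(request);
        return response.offsets(topic, partition)[0];
    }

    public static void main(String[] args) {
        // Placeholder broker host/port and topic; in practice query the partition's leader.
        SimpleConsumer consumer = new SimpleConsumer("node1", 9092, 100000, 64 * 1024, "offset-check");
        long earliest = fetchOffset(consumer, "benchmark-topic", 0,
                                    kafka.api.OffsetRequest.EarliestTime());
        long latest = fetchOffset(consumer, "benchmark-topic", 0,
                                  kafka.api.OffsetRequest.LatestTime());
        System.out.println("earliest=" + earliest + " latest=" + latest);
        consumer.close();
    }
}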