flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-4035) Bump Kafka producer in Kafka sink to Kafka
Date Wed, 24 Aug 2016 09:32:20 GMT

    [ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434589#comment-15434589

ASF GitHub Bot commented on FLINK-4035:

Github user rmetzger commented on the issue:

    @eliaslevy, I assume you are referring to https://issues.apache.org/jira/browse/FLINK-4050.

    It's good that you are mentioning the issue again, so I can move it up on my TODO list.
    I would personally prefer to first add the Kafka 0.10 module in this pull request and
then resolve FLINK-4050 independently. I know that this might lead to a little bit of duplicate
work on the Kafka 0.10 code, but on the other hand it's easier to discuss one issue at a time.

> Bump Kafka producer in Kafka sink to Kafka
> ---------------------------------------------------
>                 Key: FLINK-4035
>                 URL: https://issues.apache.org/jira/browse/FLINK-4035
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.0.3
>            Reporter: Elias Levy
>            Assignee: Robert Metzger
>            Priority: Minor
> Kafka introduced protocol changes related to the producer. Published messages
> now include timestamps, and compressed messages now include relative offsets. As it stands,
> brokers must decompress producer-compressed messages, assign offsets to them, and recompress
> them, which is wasteful and makes it less likely that compression will be used at all.
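
The relative-offset change described above can be sketched with a toy calculation (hypothetical names, not Kafka's actual classes): because each record inside a compressed batch stores its offset relative to the batch's base offset, a broker can assign the whole batch a new base offset without decompressing it.

```java
import java.util.Arrays;

public class RelativeOffsets {
    // Absolute offset of record i = base offset assigned by the broker
    // plus the relative offset stored inside the compressed wrapper.
    // Only the single base offset changes on the broker; the compressed
    // payload, with its relative offsets, is left untouched.
    static long[] toAbsolute(long baseOffset, int[] relativeOffsets) {
        long[] absolute = new long[relativeOffsets.length];
        for (int i = 0; i < relativeOffsets.length; i++) {
            absolute[i] = baseOffset + relativeOffsets[i];
        }
        return absolute;
    }

    public static void main(String[] args) {
        // Broker assigns base offset 1000 to a 3-record batch.
        long[] abs = toAbsolute(1000L, new int[]{0, 1, 2});
        System.out.println(Arrays.toString(abs)); // prints [1000, 1001, 1002]
    }
}
```

With the older format, each inner record carried an absolute offset, so the broker had to decompress, rewrite every offset, and recompress on each produce request.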

This message was sent by Atlassian JIRA
