flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-3541) Clean up workaround in FlinkKafkaConsumer09
Date Mon, 04 Apr 2016 12:13:25 GMT

    [ https://issues.apache.org/jira/browse/FLINK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224001#comment-15224001 ]

ASF GitHub Bot commented on FLINK-3541:
---------------------------------------

Github user rmetzger commented on the pull request:

    https://github.com/apache/flink/pull/1846#issuecomment-205270862
  
    Apparently https://issues.apache.org/jira/browse/KAFKA-2880 has been fixed with Kafka 0.9.0.1.
    Still, we should make sure that multiple (I'd say at least 10) test runs pass without failure.
    
    I'll look into this once it's building.


> Clean up workaround in FlinkKafkaConsumer09 
> --------------------------------------------
>
>                 Key: FLINK-3541
>                 URL: https://issues.apache.org/jira/browse/FLINK-3541
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>    Affects Versions: 1.0.0
>            Reporter: Till Rohrmann
>            Priority: Minor
>
> In the current {{FlinkKafkaConsumer09}} implementation, we repeatedly start a new {{KafkaConsumer}}
> if the method {{KafkaConsumer.partitionsFor}} throws an NPE. This is due to a bug in Kafka
> version 0.9.0.0; see https://issues.apache.org/jira/browse/KAFKA-2880. The code can be found
> in the constructor of {{FlinkKafkaConsumer09.java:208}}.
> However, the problem is marked as fixed in version 0.9.0.1, which is also the version used by
> flink-connector-kafka. Therefore, we should be able to get rid of the workaround.
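
A minimal sketch, not the actual {{FlinkKafkaConsumer09}} source, of the kind of retry-on-NPE workaround the description refers to; the helper name, retry count, and consumer configuration below are illustrative assumptions only.

{code:java}
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class PartitionDiscoverySketch {

    /**
     * Hypothetical helper: on Kafka 0.9.0.0, KAFKA-2880 can cause
     * partitionsFor() to throw a NullPointerException, so the consumer
     * is re-created and the call retried a bounded number of times.
     */
    static List<PartitionInfo> getPartitionsWithRetry(Properties props, String topic, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try (KafkaConsumer<byte[], byte[]> consumer =
                    new KafkaConsumer<>(props, new ByteArrayDeserializer(), new ByteArrayDeserializer())) {
                return consumer.partitionsFor(topic);
            } catch (NullPointerException npe) {
                // Workaround for KAFKA-2880: retry with a fresh KafkaConsumer.
            }
        }
        throw new RuntimeException("Could not fetch partitions for topic " + topic);
    }
}
{code}

With Kafka 0.9.0.1, where KAFKA-2880 is marked as fixed, the retry loop and NPE handling could be dropped and {{partitionsFor}} called once on a single consumer instance, which is the clean-up this issue proposes.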



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
