flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-3541) Clean up workaround in FlinkKafkaConsumer09
Date Tue, 05 Apr 2016 02:27:25 GMT

    https://issues.apache.org/jira/browse/FLINK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225543#comment-15225543

ASF GitHub Bot commented on FLINK-3541:

Github user skyahead commented on the pull request:

    @StephanEwen I changed the Kafka version and verified that the NullPointerException
does get caught, and that the code retries connecting 10 times.
    With the fixed version, however, the NullPointerException does not happen anymore; instead a TimeoutException
is thrown as expected and caught as expected too.
    All test cases in Kafka08ITCase (19 cases) and Kafka09ITCase (15 cases) pass in my local environment.
> Clean up workaround in FlinkKafkaConsumer09 
> --------------------------------------------
>                 Key: FLINK-3541
>                 URL: https://issues.apache.org/jira/browse/FLINK-3541
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>    Affects Versions: 1.0.0
>            Reporter: Till Rohrmann
>            Priority: Minor
> In the current {{FlinkKafkaConsumer09}} implementation, we repeatedly start a new {{KafkaConsumer}}
> if the method {{KafkaConsumer.partitionsFor}} returns an NPE. This is due to a bug in the Kafka
> version we currently use; see https://issues.apache.org/jira/browse/KAFKA-2880. The code can
> be found in the constructor of {{FlinkKafkaConsumer09.java:208}}.
> However, the problem is marked as fixed in the Kafka version that we also use for the
> flink-connector-kafka. Therefore, we should be able to get rid of the workaround.
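The retry logic of the workaround described above can be sketched roughly as follows. This is not the actual Flink code: `fetchPartitionsWithRetry` and `MAX_RETRIES` are hypothetical names, and a plain `Supplier` stands in for the `KafkaConsumer.partitionsFor` call that triggers KAFKA-2880.

```java
import java.util.function.Supplier;

public class PartitionFetchRetry {
    // Hypothetical constant, mirroring the "retries connecting 10 times"
    // behavior mentioned in the comment above.
    static final int MAX_RETRIES = 10;

    // Retries the supplied call whenever it throws a NullPointerException
    // (the symptom of KAFKA-2880); in the real workaround, each attempt
    // starts a fresh KafkaConsumer and calls partitionsFor on it.
    static <T> T fetchPartitionsWithRetry(Supplier<T> partitionsFor) {
        NullPointerException last = null;
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            try {
                return partitionsFor.get();
            } catch (NullPointerException npe) {
                last = npe; // remember the failure and retry
            }
        }
        // All attempts failed: surface the last NPE to the caller.
        throw last;
    }
}
```

Once the Kafka bug is fixed, this loop collapses to a single direct call, which is what cleaning up the workaround amounts to.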

This message was sent by Atlassian JIRA
