flink-issues mailing list archives

From "Timo Walther (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-10107) SQL Client end-to-end test fails for releases
Date Thu, 09 Aug 2018 13:37:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574846#comment-16574846 ]

Timo Walther commented on FLINK-10107:
--------------------------------------

A temporary fix has been merged that avoids the dependency conflicts in the test.

Fixed in 1.7.0: 764da8183b6dfd0fe00eebe96fb619ca8d096047
Fixed in 1.6.1: da23c5d3c1e921cd7d2a88b7c0892f17e5d7276f

> SQL Client end-to-end test fails for releases
> ---------------------------------------------
>
>                 Key: FLINK-10107
>                 URL: https://issues.apache.org/jira/browse/FLINK-10107
>             Project: Flink
>          Issue Type: Bug
>          Components: Table API &amp; SQL
>            Reporter: Timo Walther
>            Assignee: Timo Walther
>            Priority: Major
>              Labels: pull-request-available
>
> It seems that the SQL JARs for Kafka 0.10 and Kafka 0.9 have conflicts that only occur for releases and not for SNAPSHOT builds. This might be due to their file names: depending on the file name, either 0.9 is loaded before 0.10 or vice versa (see the sketch after the stack traces below).
> One of the following errors occurred:
> {code}
> 2018-08-08 18:28:51,636 ERROR org.apache.flink.kafka09.shaded.org.apache.kafka.clients.ClientUtils
 - Failed to close coordinator
> java.lang.NoClassDefFoundError: org/apache/flink/kafka09/shaded/org/apache/kafka/common/requests/OffsetCommitResponse
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:473)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:357)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.maybeAutoCommitOffsetsSync(ConsumerCoordinator.java:439)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.close(ConsumerCoordinator.java:319)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.ClientUtils.closeQuietly(ClientUtils.java:63)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1277)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1258)
>     at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:286)
> Caused by: java.lang.ClassNotFoundException: org.apache.flink.kafka09.shaded.org.apache.kafka.common.requests.OffsetCommitResponse
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     ... 8 more
> {code}
> {code}
> java.lang.NoSuchFieldError: producer
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.invoke(FlinkKafkaProducer010.java:369)
>     at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>     at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>     at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>     at org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>     at org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
> {code}
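
As a minimal illustration of the file-name sensitivity described above: the sketch below sorts two hypothetical SQL JAR names the way a sorted directory listing would order them before they land on the classpath. The JAR file names, the class name SqlJarOrdering, and the assumption that the JARs are picked up in sorted order are illustrative only and not taken from the actual end-to-end test.

{code}
import java.util.Arrays;

// Hypothetical sketch: if the SQL JARs are picked up in sorted directory-listing
// order, whichever file name sorts first has its shaded Kafka classes resolved
// first. When the two JARs overlap only partially, lookups of the remaining
// classes (e.g. org.apache.flink.kafka09.shaded...OffsetCommitResponse) can then
// fail with NoClassDefFoundError, as in the stack traces above.
public class SqlJarOrdering {
    public static void main(String[] args) {
        // File names are invented for illustration only.
        String[] jars = {
            "flink-connector-kafka-0.9_2.11-1.6.0-sql-jar.jar",
            "flink-connector-kafka-0.10_2.11-1.6.0-sql-jar.jar"
        };
        Arrays.sort(jars); // lexicographic order: "0.10" sorts before "0.9"
        System.out.println("Classpath order: " + Arrays.toString(jars));
    }
}
{code}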



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
