flink-issues mailing list archives

From "sunjincheng (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (FLINK-12602) Correct the flink pom `artifactId` config and scala-free check logic
Date Mon, 27 May 2019 11:08:00 GMT

[ https://issues.apache.org/jira/browse/FLINK-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848836#comment-16848836 ]

sunjincheng edited comment on FLINK-12602 at 5/27/19 11:07 AM:
---------------------------------------------------------------

Sounds good [~Zentol]!

I have a few updates on the handling of the connectors. We have two ways to deal with it:
 # Improve the script to check any (compile) dependencies with a scala suffix (which we mentioned above).
 # Add the `flink-streaming-java` dependency with `provided` scope to the corresponding connectors.

For approach 1, we should add the following check logic (see the combined sketch after this list):
 * The `dependency:tree` command should add the option `-Dincludes=org.apache.flink:*_2.1*::` to match artifacts such as `org.apache.flink:flink-streaming-java_2.11`.
 * The `grep` command also needs the pattern `-E "org.scala-lang|- org.apache.flink:[^:]+_2\.1[0-9]"`; to exclude test-scoped dependencies, add `grep --invert-match "org.apache.flink:[^:]*_2\.1[0-9]:.*:.*:test"`.
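For illustration, the two pieces combined could look like the following sketch (the module path and exact filtering are illustrative, not final):
{code}
# Sketch only: list Flink dependencies that carry a scala suffix (or scala-lang),
# then drop test-scoped entries. The module path below is hypothetical.
mvn dependency:tree -Dincludes=org.apache.flink:*_2.1*:: -pl flink-connectors/flink-connector-kafka \
  | grep -E "org.scala-lang|- org.apache.flink:[^:]+_2\.1[0-9]" \
  | grep --invert-match "org.apache.flink:[^:]*_2\.1[0-9]:.*:.*:test"
{code}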

For approach 2, we should add the `flink-streaming-java` dependency to `flink-sql-connector-elasticsearch6`, `flink-sql-connector-kafka`, `flink-sql-connector-kafka-0.10`, `flink-sql-connector-kafka-0.11`, and `flink-sql-connector-kafka-0.9`. I have prepared the changes [here|https://github.com/sunjincheng121/flink/pull/96].
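For illustration, the added entry in each connector pom might look like the following sketch (a `provided` dependency is not bundled into the shaded jar; the version property is assumed from the parent pom):
{code}
<!-- Sketch only: approach 2 adds flink-streaming-java with provided scope -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
  <version>${project.version}</version>
  <scope>provided</scope>
</dependency>
{code}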

For now, I think the check logic in approach 1 is a bit complex (a lot of filtering logic). So I prefer approach 2: even when we add new modules in the future, we should know well whether the `artifactId` needs the scala suffix, and we can then manually add the scala dependencies in the pom.

What do you think?



> Correct the flink pom `artifactId` config and scala-free check logic
> --------------------------------------------------------------------
>
>                 Key: FLINK-12602
>                 URL: https://issues.apache.org/jira/browse/FLINK-12602
>             Project: Flink
>          Issue Type: Bug
>          Components: Build System
>    Affects Versions: 1.9.0
>            Reporter: sunjincheng
>            Assignee: sunjincheng
>            Priority: Major
>
> I found a shell issue in `verify_scala_suffixes.sh` (line 145):
> {code}
> grep "${module}_\d\+\.\d\+</artifactId>" "{}"
> {code}
> This code is meant to find all modules whose `artifactId` carries a `scala_binary_version` suffix.
> The problem is that all our `artifactId` values use the pattern `XXX_${scala.binary.version}`, such as:
> {code}
> <artifactId>flink-tests_${scala.binary.version}</artifactId>
> {code}
> The grep result is therefore always empty, so this check never takes effect: the pattern expects a resolved version such as `_2.11`, but the poms contain the unresolved placeholder.
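> For illustration, a check that matches the placeholder as it actually appears in the poms might look like the following sketch (untested; dots left unescaped for brevity):
> {code}
> grep "${module}_\${scala.binary.version}</artifactId>" "{}"
> {code}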
> I have already initiated a discussion of this issue; please see the mail thread for details:
> http://mail-archives.apache.org/mod_mbox/flink-dev/201905.mbox/%3CCAJSjTKw+8McSC0FvNeyaOVL_TTrr_UUOsX-TFGxj5GfQp1AUtQ@mail.gmail.com%3E



