flink-issues mailing list archives

From "Yu Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.
Date Wed, 20 Mar 2019 20:02:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797504#comment-16797504 ]

Yu Li commented on FLINK-11972:

There's another prerequisite for running the {{test_streaming_bucketing.sh}} case: we must make
sure to run {{mvn install}} in the flink-end-to-end-tests directory first. More details below.
In {{test_streaming_bucketing.sh}}, the command to submit the job and extract the job id looks like:
{code}
JOB_ID=$($FLINK_DIR/bin/flink run -d -p 4 $TEST_PROGRAM_JAR -outputPath $TEST_DATA_DIR/out/result \
  | grep "Job has been submitted with JobID" | sed 's/.* //g')
{code}
The {{TEST_PROGRAM_JAR}} needs to be generated by {{mvn install}} and won't be there by default.
In that case, the output of the job submission command will be something like:
{code}
Could not build the program from JAR file.

Use the help option (-h or --help) to get help on the command.
{code}
Thus the grep matches nothing, the job id is empty, and the script hangs in the {{wait_job_running}}
phase, with log lines like:
{code}
Job () is running.
Waiting for job () to have at least 5 completed checkpoints ...
{code}
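To illustrate why the hang happens, here is a minimal sketch of the same extraction pipeline with a guard on an empty job id (the helper name and the sample strings are illustrative, not from the script):

```shell
#!/bin/sh
# Hypothetical helper: extract the job id the same way the script does,
# then guard against an empty result instead of letting wait_job_running
# spin forever on "Job () is running.".
extract_job_id() {
  echo "$1" | grep "Job has been submitted with JobID" | sed 's/.* //g'
}

JOB_ID=$(extract_job_id "Job has been submitted with JobID deadbeef1234")
echo "job id: $JOB_ID"   # -> job id: deadbeef1234

JOB_ID=$(extract_job_id "Could not build the program from JAR file.")
if [ -z "$JOB_ID" ]; then
  # This is where the current script keeps going and hangs.
  echo "job submission failed; the script should exit here instead of waiting"
fi
```

The `sed 's/.* //g'` strips everything up to the last space, which leaves only the job id on a successful submission and an empty string otherwise.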

Will add a notice to the documentation, and also try to improve the script to log and fail
fast if the target jar is missing.
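A minimal sketch of what that fast-fail check could look like (the function name and messages are hypothetical, not the actual patch):

```shell
#!/bin/sh
# Hypothetical fast-fail check: verify the test program jar exists before
# submitting, and point the user at the build step otherwise.
check_program_jar() {
  if [ ! -f "$1" ]; then
    echo "Test program jar '$1' not found." >&2
    echo "Run 'mvn install' in flink-end-to-end-tests first." >&2
    return 1
  fi
}

# Example: a missing jar makes the check fail up front instead of
# leaving the script hanging in wait_job_running later.
check_program_jar "/tmp/flink-e2e-missing-example.jar" || echo "would exit 1 here"
```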

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the
end-to-end test.
> --------------------------------------------------------------------------------------------------------
>                 Key: FLINK-11972
>                 URL: https://issues.apache.org/jira/browse/FLINK-11972
>             Project: Flink
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.8.0, 1.9.0
>            Reporter: sunjincheng
>            Priority: Major
> The difference between 1.8.0 and 1.7.x is that 1.8.x no longer bundles the `hadoop-shaded`
JAR into the dist. This causes an error during the end-to-end test when `Hadoop`-related
classes cannot be found, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`.
So we need to improve the end-to-end test script, or state explicitly in the README that the
end-to-end test needs `flink-shaded-hadoop2-uber-XXXX.jar` on the classpath. Otherwise, we
will get an exception like:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can either improve the test script or improve the README.
> What do you think?
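For reference, one way to satisfy the requirement described above is to stage the shaded Hadoop uber jar into the distribution's {{lib/}} directory before running the tests. A sketch, where the paths, the jar version, and the helper name are all assumptions rather than the project's actual tooling:

```shell
#!/bin/sh
# Hypothetical helper: copy the shaded Hadoop uber jar into the Flink
# distribution's lib/ directory so it lands on the classpath.
# FLINK_DIR and SHADED_JAR below are illustrative defaults.
FLINK_DIR="${FLINK_DIR:-/opt/flink}"
SHADED_JAR="flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar"

stage_hadoop_jar() {
  src="$1"
  dest_dir="$2"
  if [ ! -f "$src" ]; then
    echo "shaded Hadoop jar not found: $src" >&2
    return 1
  fi
  cp "$src" "$dest_dir/"
}

# Example usage (commented out; paths are assumptions):
# stage_hadoop_jar "$HOME/Downloads/$SHADED_JAR" "$FLINK_DIR/lib"
```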

This message was sent by Atlassian JIRA
