flink-issues mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [flink] zjffdu opened a new pull request #8533: Flink 12596
Date Fri, 24 May 2019 08:34:12 GMT
URL: https://github.com/apache/flink/pull/8533
   ## What is the purpose of the change
   In `FlinkShell`, we handle the different cluster modes separately and use Scala pattern matching to tell them apart. This is not necessary and makes the code unreadable. We can unify them behind `ClusterClient`, which supports all the cluster modes.
   ## Brief change log
   1. Refactoring of `FlinkShell.scala`
      a. Replace `Option[Either[MiniCluster, ClusterClient[_]]]` with `ClusterClient[_]`
      b. Move the `ClusterClient` into `FlinkILoop.scala` so that `FlinkILoop` controls the shutdown of the `ClusterClient` (it is shut down in `FlinkILoop#closeInterpreter`)
      c. Introduce the method `startShell(config: Config, in: Option[BufferedReader], out: JPrintWriter)` to make the production code and test code consistent (no separate `bufferedReader` needs to be introduced for unit testing)
   2. Refactoring of the unit test code
      a. Refactor `ScalaShellITCase#processInShell` to make it readable. Each test now creates a new `FlinkShell` instance, which creates a new `MiniCluster` by default.
      b. Remove `ScalaShellLocalStartupITCase`, as `ScalaShellITCase` already covers this case after this PR.
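   The unification in point 1 can be sketched roughly as follows. This is an illustrative, simplified model only, not the actual Flink code: `OldCluster`, `Mini`, `Remote`, and `ClusterClientLike` are hypothetical stand-ins for `MiniCluster` and `ClusterClient[_]`.

```scala
// Hypothetical sketch of the refactoring described above.

// Before: every call site pattern-matched over the cluster variants.
sealed trait OldCluster
case class Mini(name: String) extends OldCluster    // stand-in for MiniCluster
case class Remote(name: String) extends OldCluster  // stand-in for ClusterClient[_]

def oldShutdown(c: Option[Either[Mini, Remote]]): String = c match {
  case Some(Left(mini))    => s"stopping ${mini.name}"
  case Some(Right(remote)) => s"closing ${remote.name}"
  case None                => "nothing to do"
}

// After: a single ClusterClient-like handle covers all modes, so the
// shutdown logic collapses to one call and the Option/Either wrapper goes away.
trait ClusterClientLike { def close(): String }

def newShutdown(client: ClusterClientLike): String = client.close()
```

   With the single handle, `FlinkILoop` can own the client and simply close it in `closeInterpreter`, instead of re-matching the variants at every shutdown path.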
   ## Verifying this change
   *(Please pick either of the following options)*
   This change is a trivial rework / code cleanup without any test coverage.
   This change is already covered by existing tests, such as *(please describe tests)*.
   This change added tests and can be verified as follows:
     - *Added integration tests for end-to-end deployment with large payloads (100MB)*
     - *Extended integration test for recovery after master (JobManager) failure*
     - *Added test that validates that TaskInfo is transferred only once across recoveries*
     - *Manually verified the change by running a 4 node cluster with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.*
   ## Does this pull request potentially affect one of the following parts:
     - Dependencies (does it add or upgrade a dependency): (yes / no)
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no)
     - The serializers: (yes / no / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / no / don't know)
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
     - The S3 file system connector: (yes / no / don't know)
   ## Documentation
     - Does this pull request introduce a new feature? (yes / no)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

