flink-issues mailing list archives

From tillrohrmann <...@git.apache.org>
Subject [GitHub] flink pull request #6251: [FLINK-9693] Set Execution#taskRestore to null aft...
Date Wed, 04 Jul 2018 09:11:11 GMT
GitHub user tillrohrmann opened a pull request:


    [FLINK-9693] Set Execution#taskRestore to null after deployment

    ## What is the purpose of the change
    Setting the assigned Execution#taskRestore to null after deployment allows the
    JobManagerTaskRestore instance to be garbage collected. Furthermore, it is no longer
    archived along with the Execution in the ExecutionVertex in case of a restart. This
    is especially important when setting state.backend.fs.memory-threshold to larger
    values, because all state below this threshold is stored in the meta state files
    and, thus, also held by the JobManagerTaskRestore instances, which can grow large.

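    The fix boils down to clearing the field once its value has been handed to the
    task deployment descriptor. A minimal sketch of the idea (class and field names
    mirror the PR, but the bodies, setter, and getter are illustrative assumptions,
    not Flink's actual code):

    ```java
    // Illustrative sketch of the FLINK-9693 fix, not Flink's real implementation.
    class JobManagerTaskRestore {
        // State below state.backend.fs.memory-threshold is stored inline in the
        // meta data, so these instances can be large.
        final byte[] inlineState;
        JobManagerTaskRestore(byte[] inlineState) { this.inlineState = inlineState; }
    }

    class Execution {
        private JobManagerTaskRestore taskRestore;

        void setTaskRestore(JobManagerTaskRestore restore) { this.taskRestore = restore; }
        JobManagerTaskRestore getTaskRestore() { return taskRestore; }

        void deploy() {
            // Hand the restore state over to the task deployment descriptor ...
            JobManagerTaskRestore restore = taskRestore;
            // ... (build and ship the descriptor using `restore`) ...
            // Then null the field: the instance can now be garbage collected and
            // is no longer archived with this Execution when the job restarts.
            taskRestore = null;
        }
    }
    ```

    The added test `ExecutionTest#testTaskRestoreStateIsNulledAfterDeployment` checks
    exactly this post-condition: after `deploy()`, the field is null.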
    ## Verifying this change
    - Added `ExecutionTest#testTaskRestoreStateIsNulledAfterDeployment`
    ## Does this pull request potentially affect one of the following parts:
      - Dependencies (does it add or upgrade a dependency): (no)
      - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
      - The serializers: (no)
      - The runtime per-record code paths (performance sensitive): (no)
      - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
      - The S3 file system connector: (no)
    ## Documentation
      - Does this pull request introduce a new feature? (no)
      - If yes, how is the feature documented? (not applicable)

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/tillrohrmann/flink fixMemoryLeakInJobManager

Alternatively you can review and apply these changes as the patch at:


To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #6251


