beam-commits mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (BEAM-610) Enable spark's checkpointing mechanism for driver-failure recovery in streaming
Date Wed, 31 Aug 2016 19:12:20 GMT

    [ https://issues.apache.org/jira/browse/BEAM-610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15453099#comment-15453099
] 

ASF GitHub Bot commented on BEAM-610:
-------------------------------------

GitHub user amitsela opened a pull request:

    https://github.com/apache/incubator-beam/pull/909

    [BEAM-610] Enable spark's checkpointing mechanism for driver-failure recovery in streaming

    Be sure to do all of the following to help us incorporate your contribution
    quickly and easily:
    
     - [ ] Make sure the PR title is formatted like:
       `[BEAM-<Jira issue #>] Description of pull request`
     - [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
           Travis-CI on your fork and ensure the whole test matrix passes).
     - [ ] Replace `<Jira issue #>` in the title with the actual Jira issue
           number, if there is one.
     - [ ] If this contribution is large, please file an Apache
           [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.txt).
    
    ---
    
    Support basic functionality with GroupByKey and ParDo.
    
    Added support for grouping operations.
    
    Added checkpointDir option, using it before execution.
    
    Support Accumulator recovery from checkpoint.
    
    Streaming tests should use JUnit's TemporaryFolder Rule for checkpoint directory.
    
    Support combine optimizations.
    
    Support durable sideInput via Broadcast.
    
    Branches in the pipeline are either Bounded or Unbounded and should be handled accordingly.
    
    Handle flatten/union of Bounded/Unbounded RDD/DStream.
    
    JavaDoc
    
    Rebased on master.
    
    Reuse functionality between batch and streaming translators
    
    Better implementation of streaming/batch pipeline-branch translation.
    
    Move group/combine functions to their own wrapping class.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/amitsela/incubator-beam BEAM-610

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-beam/pull/909.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #909
    
----
commit ec9cd0805c23afd792dcccf0f8fb268cdbb0e319
Author: Sela <ansela@paypal.com>
Date:   2016-08-25T20:49:01Z

    Refactor translation mechanism to support checkpointing of DStream.
    
    Support basic functionality with GroupByKey and ParDo.
    
    Added support for grouping operations.
    
    Added checkpointDir option, using it before execution.
    
    Support Accumulator recovery from checkpoint.
    
    Streaming tests should use JUnit's TemporaryFolder Rule for checkpoint directory.
    
    Support combine optimizations.
    
    Support durable sideInput via Broadcast.
    
    Branches in the pipeline are either Bounded or Unbounded and should be handled accordingly.
    
    Handle flatten/union of Bounded/Unbounded RDD/DStream.
    
    JavaDoc
    
    Rebased on master.
    
    Reuse functionality between batch and streaming translators
    
    Better implementation of streaming/batch pipeline-branch translation.
    
    Move group/combine functions to their own wrapping class.

----


> Enable spark's checkpointing mechanism for driver-failure recovery in streaming
> -------------------------------------------------------------------------------
>
>                 Key: BEAM-610
>                 URL: https://issues.apache.org/jira/browse/BEAM-610
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>            Reporter: Amit Sela
>            Assignee: Amit Sela
>
> For streaming applications, Spark provides a checkpoint mechanism useful for stateful
> processing and driver failures. See: https://spark.apache.org/docs/1.6.2/streaming-programming-guide.html#checkpointing
> This requires the "lambdas", i.e. the content of DStream/RDD functions, to be Serializable.
> Currently, the runner delegates a lot of the streaming translation work to the batch
> translator, which can no longer be the case because it passes along non-serializables.
> This also requires wrapping the creation of the streaming application's graph in a "getOrCreate"
> manner. See: https://spark.apache.org/docs/1.6.2/streaming-programming-guide.html#how-to-configure-checkpointing
> Another limitation is the need to wrap Accumulators and Broadcast variables in singletons
> so that they are re-created once they become stale after recovery.
> This work is a prerequisite to supporting PerKey workflows, which will be supported via
> Spark's stateful operators such as mapWithState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
