spark-issues mailing list archives

From "Henry Robinson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-24374) SPIP: Support Barrier Scheduling in Apache Spark
Date Fri, 01 Jun 2018 19:46:00 GMT


Henry Robinson commented on SPARK-24374:
----------------------------------------

The use case in the SPIP isn't 100% convincing. I'm concerned about the idea of embedding
other execution engines inside Spark tasks, effectively using Spark's job scheduler as a cluster
manager. Resource consumption would vary widely across the phases of such a task (some tasks
would be idle at the barrier while one launched MPI), making it hard to make sensible allocation
decisions. Do you anticipate requiring support from the cluster manager for gang scheduling?
Or is {{barrier()}} going to be enough to ensure that all tasks wait, even if they occupy
executor slots on the cluster for a long time while waiting for the other tasks to be scheduled?
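
To make the slot-occupancy concern concrete, here is a toy model (plain Python threads
standing in for tasks, a thread pool standing in for executor slots; {{run_stage}} and its
parameters are made-up names, not Spark API): if barrier-synchronized tasks hold their slots
while waiting, a stage with more tasks than available slots can never complete.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def run_stage(num_tasks, num_slots, wait_timeout=1.0):
    """Toy model: num_tasks tasks that all wait at a barrier, run on num_slots slots.

    Returns True if the stage completes, False if it 'deadlocks'
    (detected here via a timeout on the barrier wait).
    """
    barrier = threading.Barrier(num_tasks)

    def task(_):
        try:
            # Blocks until ALL tasks arrive -- while blocked, the slot is held.
            barrier.wait(timeout=wait_timeout)
            return True
        except threading.BrokenBarrierError:
            # Timed out: not every task could get a slot at the same time.
            return False

    with ThreadPoolExecutor(max_workers=num_slots) as pool:
        return all(pool.map(task, range(num_tasks)))

# With as many slots as tasks, the barrier is reached and the stage completes:
#   run_stage(4, 4)  -> True
# With fewer slots than tasks, the waiting tasks starve the unscheduled ones:
#   run_stage(4, 2)  -> False
```

This is exactly the situation where either the cluster manager gang-schedules the whole
stage, or the scheduler must know not to launch a barrier stage it can't fit all at once.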

An alternative design for the example in the SPIP document would be to split it into more
than one Spark job, e.g.:

# (Spark job) write the input files in parallel
# (not via Spark) launch the MPI job via the cluster manager (perhaps a kubernetes pod, for
example)
# (Spark job) consume the output files in parallel
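
A driver-side sketch of that three-phase split (all function bodies here are hypothetical
stand-ins: phase 1 and 3 would be real Spark jobs, and phase 2 would be a job launched via
the cluster manager, not a local call):

```python
from concurrent.futures import ThreadPoolExecutor

def write_partition(i):
    # Phase 1 stand-in: each Spark task writes one input file.
    return f"input-{i}"

def run_mpi_job(inputs):
    # Phase 2 stand-in: the driver launches the MPI job outside Spark
    # (e.g. as a kubernetes pod) and blocks until it finishes.
    return [p.replace("input", "output") for p in inputs]

def read_partition(path):
    # Phase 3 stand-in: each Spark task consumes one output file.
    return path.upper()

def pipeline(num_partitions):
    with ThreadPoolExecutor() as pool:                       # phase 1, in parallel
        inputs = list(pool.map(write_partition, range(num_partitions)))
    outputs = run_mpi_job(inputs)                            # phase 2, driver waits
    with ThreadPoolExecutor() as pool:                       # phase 3, in parallel
        return list(pool.map(read_partition, outputs))
```

The point is that the only synchronization needed lives in the driver, between jobs, rather
than inside tasks.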

Spark already has semantics that let the driver wait until a job finishes, so the driver can
naturally coordinate the different phases without extra coordination primitives being added
to individual tasks. But synchronous execution is, of course, common in distributed systems,
so maybe there are more compelling use cases than the one in the SPIP?

> SPIP: Support Barrier Scheduling in Apache Spark
> ------------------------------------------------
>
>                 Key: SPARK-24374
>                 URL: https://issues.apache.org/jira/browse/SPARK-24374
>             Project: Spark
>          Issue Type: Epic
>          Components: ML, Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Major
>              Labels: SPIP
>         Attachments: SPIP_ Support Barrier Scheduling in Apache Spark.pdf
>
>
> (See details in the linked/attached SPIP doc.)
> {quote}
> The proposal here is to add a new scheduling model to Apache Spark so users can properly
embed distributed DL training as a Spark stage to simplify the distributed training workflow.
For example, Horovod uses MPI to implement all-reduce to accelerate distributed TensorFlow
training. The computation model is different from MapReduce used by Spark. In Spark, a task
in a stage doesn’t depend on any other tasks in the same stage, and hence it can be scheduled
independently. In MPI, all workers start at the same time and pass messages around. To embed
this workload in Spark, we need to introduce a new scheduling model, tentatively named “barrier
scheduling”, which launches tasks at the same time and provides users enough information
and tooling to embed distributed DL training. Spark can also provide an extra layer of fault
tolerance: if some tasks fail in the middle, Spark would abort all tasks and restart the stage.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

