spark-issues mailing list archives

From "Wangda Tan (JIRA)" <>
Subject [jira] [Commented] (SPARK-24374) SPIP: Support Barrier Scheduling in Apache Spark
Date Sat, 02 Jun 2018 01:22:00 GMT


Wangda Tan commented on SPARK-24374:

Thanks [~mengxr] for filing the JIRA, and thanks [~ywskycn] for the ping.

My questions and $.02 about this:

1) This JIRA is trying to solve the gang-scheduling problem for ML applications; however, gang
scheduling should be handled by the underlying resource scheduler rather than by Spark, because
Spark in non-standalone deployments has no control over how resources are allocated.

YARN, for example, has a reservation system (YARN-1051) to handle the gang-scheduling problem.
Here's the related paper: [].

If the proposed API is meant to implement gang scheduling using a gather-and-hold pattern, the
existing Spark API should be good enough: just keep requesting resources until the target
#containers is reached. The application needs to wait in both cases.
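To make the gather-and-hold pattern concrete, here is a minimal sketch, with the caveat that `MockCluster` and its methods are hypothetical stand-ins for whatever RM (YARN, standalone, etc.) actually grants executors; this is not a real Spark or YARN API:

```python
# Hedged sketch of gather-and-hold: keep requesting containers until the
# target count is reached, and only then let the gang start work.

class MockCluster:
    """Toy resource manager that grants one container per request (hypothetical)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.granted = 0

    def request_container(self):
        if self.granted < self.capacity:
            self.granted += 1
            return True          # container granted
        return False             # cluster exhausted; a real caller would wait/retry

def gather_and_hold(cluster, target):
    """Hold every granted container until `target` containers are acquired."""
    held = 0
    while held < target:
        if cluster.request_container():
            held += 1            # hold the container, don't start work yet
        else:
            raise RuntimeError("cluster cannot satisfy the gang request")
    return held                  # gang complete; tasks may now launch together

print(gather_and_hold(MockCluster(capacity=8), target=4))  # 4
```

The point is that nothing runs until all `target` containers are held, which is exactly the waiting behavior the application would also see under a scheduler-side gang reservation.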

2) For ML applications, the gang-scheduling problem is just the tip of the iceberg. As I mentioned
above, the simplest gather-and-hold mode works in most cases. The hardest part is how to integrate
an existing app with the new system, and this differs from app to app. For example:
 - MPI needs launched processes to contact their master so the master can launch slaves and make
them interconnect with each other (phone-home). The application needs to implement logic to
talk to different RMs. (Ref: MAPREDUCE-2911 / [])
 - TensorFlow needs a similar setup; we recently spent a lot of time making distributed TF
run on top of YARN native services (Apache Hadoop 3.1.0+), which can support Docker, Kerberized
HDFS, etc. See YARN-8220.
 - Fault tolerance is an even harder problem.
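The phone-home handshake in the MPI bullet above can be sketched roughly as follows; the names and addresses here are illustrative only (not any real MPI or TF API), and in-process queues stand in for the network:

```python
# Rough illustration of the phone-home pattern: each launched worker reports
# its address to the master; once all workers have checked in, the master
# broadcasts the full roster so workers can interconnect with their peers.
import queue
import threading

def master(registrations, num_workers, rosters):
    roster = [registrations.get() for _ in range(num_workers)]  # collect check-ins
    for out in rosters:                                         # broadcast roster
        out.put(sorted(roster))

def worker(rank, registrations, roster_in, results):
    addr = f"host{rank}:5000"        # pretend address of this worker (made up)
    registrations.put(addr)          # phone home to the master
    results[rank] = roster_in.get()  # wait for the full worker list, then connect

num_workers = 3
registrations = queue.Queue()
rosters = [queue.Queue() for _ in range(num_workers)]
results = {}

threads = [threading.Thread(target=worker, args=(r, registrations, rosters[r], results))
           for r in range(num_workers)]
threads.append(threading.Thread(target=master, args=(registrations, num_workers, rosters)))
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0])  # ['host0:5000', 'host1:5000', 'host2:5000']
```

This is the app-specific glue I mean: each framework has its own version of this handshake, and that logic has to be rewritten for each RM it runs on.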

3) One potential benefit I can think of for embedding apps in Spark is that applications could
directly read from the memory of Spark tasks, but I'm not sure how many existing ML apps support
this. Is there any other benefit? If we use Spark only as a workflow orchestration engine, using
workflow management tools like Oozie, or the suggestion from [~henryr], should be good enough.

> SPIP: Support Barrier Scheduling in Apache Spark
> ------------------------------------------------
>                 Key: SPARK-24374
>                 URL:
>             Project: Spark
>          Issue Type: Epic
>          Components: ML, Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Major
>              Labels: SPIP
>         Attachments: SPIP_ Support Barrier Scheduling in Apache Spark.pdf
> (See details in the linked/attached SPIP doc.)
> {quote}
> The proposal here is to add a new scheduling model to Apache Spark so users can properly
embed distributed DL training as a Spark stage to simplify the distributed training workflow.
For example, Horovod uses MPI to implement all-reduce to accelerate distributed TensorFlow
training. The computation model is different from MapReduce used by Spark. In Spark, a task
in a stage doesn’t depend on any other tasks in the same stage, and hence it can be scheduled
independently. In MPI, all workers start at the same time and pass messages around. To embed
this workload in Spark, we need to introduce a new scheduling model, tentatively named “barrier
scheduling”, which launches tasks at the same time and provides users enough information
and tooling to embed distributed DL training. Spark can also provide an extra layer of fault
tolerance in case some tasks failed in the middle, where Spark would abort all tasks and restart
the stage.
> {quote}
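For readers unfamiliar with the term, the barrier semantics the quoted SPIP describes can be mimicked in plain Python: all tasks in a stage start together and none proceeds past the synchronization point until every peer has arrived. `threading.Barrier` stands in for the proposed Spark-level primitive, and the task body is made up:

```python
# Minimal, library-free sketch of barrier semantics: every task finishes its
# independent "compute" step before any task enters the post-barrier phase
# (where message passing, e.g. MPI all-reduce, would happen).
import threading

NUM_TASKS = 4
barrier = threading.Barrier(NUM_TASKS)
order = []
lock = threading.Lock()

def task(rank):
    with lock:
        order.append(("compute", rank))   # independent work, any interleaving
    barrier.wait()                        # no task passes until all arrive
    with lock:
        order.append(("after", rank))     # all-reduce / message passing goes here

threads = [threading.Thread(target=task, args=(r,)) for r in range(NUM_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print([tag for tag, _ in order[:NUM_TASKS]])  # ['compute', 'compute', 'compute', 'compute']
```

Under the proposal, Spark would additionally restart the whole stage if any task fails mid-barrier, which a plain in-process barrier like this does not capture.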

This message was sent by Atlassian JIRA

