spark-user mailing list archives

From Matei Zaharia <matei.zaha...@gmail.com>
Subject Re: Task output before a shuffle
Date Tue, 29 Oct 2013 01:47:17 GMT
Hi Ufuk,

Yes, we still write out data after these tasks in Spark 0.8, and it needs to be written out
before any stage that reads it can start. The main reasons are simplicity in the face of
faults, as well as more flexible scheduling (you don't have to decide in advance where each
reduce task will run, you can have more reduce tasks than you have CPU cores, etc.).
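To make the barrier concrete, here is a toy sketch (mine, not Spark's actual code) of the pull model: every map task materializes its complete partitioned output first, and only then do reduce tasks pull their partitions. The function and parameter names are illustrative.

```python
# Toy model of a blocking ("pull") shuffle: all map output is fully
# materialized before any reduce task starts. This is a sketch of the
# idea only, not Spark's implementation.

def run_shuffle(records, num_reducers, reduce_fn):
    # Map side: hash each record's key to a reduce partition and build
    # the complete partitioned output (Spark writes this to disk).
    map_output = {p: [] for p in range(num_reducers)}
    for key, value in records:
        map_output[hash(key) % num_reducers].append((key, value))

    # Barrier: the loop above must finish before the loop below runs.
    # Because partition placement is decided only by the hash, the
    # scheduler is free to place reduce tasks anywhere, and there can
    # be more partitions than cores.

    # Reduce side: each reduce task pulls its partition and aggregates
    # values by key.
    result = {}
    for p in range(num_reducers):
        for key, value in map_output[p]:
            result[key] = reduce_fn(result[key], value) if key in result else value
    return result
```

For example, summing values by key: run_shuffle([("a", 1), ("b", 2), ("a", 3)], 4, lambda x, y: x + y) yields {"a": 4, "b": 2}. If a reduce task fails, only the reduce-side loop needs to rerun, since the map output still exists — the simplicity under faults mentioned above.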

Matei

On Oct 28, 2013, at 9:25 AM, Ufuk Celebi <u.celebi@fu-berlin.de> wrote:

> Hey everybody,
> 
> I just watched the Spark Internals presentation [1] from the December 2012 dev meetup and have a couple of questions regarding the output of tasks before a shuffle.
> 
> 1. Can anybody confirm that the default is still to persist stage output to RAM/disk and then have the following tasks pull it (see [1] around 45:40)? I guess a couple of things have changed since last year. I just want to be sure that this is not one of those things. ;-)
> 
> 2. Is it possible to switch to a "push" model between stages instead of having the following tasks "pull" the result? I guess this is equivalent to the question of whether it is possible to turn persisting results off.
> 
> 3. Does the data need to be fully persisted before the next stage can start? Or will the following task start pulling data before everything is written out?
> 
> 4. Is the main motivation for persisting to have faster recovery times on failures (e.g. checkpointing)?
> 
> Best wishes,
> 
> Ufuk
> 
> [1] http://www.youtube.com/watch?v=49Hr5xZyTEA
