spark-user mailing list archives

From <>
Subject RE: What does "Spark is not just MapReduce" mean? Isn't every Spark job a form of MapReduce?
Date Mon, 29 Jun 2015 12:58:20 GMT

Any {fan-out -> process in parallel -> fan-in -> aggregate} pattern of data flow
can conceptually be expressed as MapReduce (MR, as it is done in Hadoop).

Apart from the much larger set of operators (map, reduce, sort, filter, pipe, join, combine, ...),
which are often more efficient and more productive for developers, it is how Spark executes them
that is different.
Ex: RDDs enable high availability of data with only a single copy, since lost partitions can be
recomputed from their lineage. HDFS needs multiple replicas, causing a lot of extra I/O.
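To make the pattern concrete, here is a classic word count written as {fan-out -> process -> fan-in -> aggregate} in plain Python. This is only an illustrative sketch of the shape of the pattern, not Spark code:

```python
from functools import reduce

# Classic word count expressed as the
# {fan-out -> process in parallel -> fan-in -> aggregate} pattern.
lines = ["spark is fast", "spark is not just mapreduce"]

# Map phase: fan out each line into (word, 1) pairs.
pairs = [(word, 1) for line in lines for word in line.split()]

# Shuffle / fan-in phase: group the pair values by key.
grouped = {}
for word, count in pairs:
    grouped.setdefault(word, []).append(count)

# Reduce phase: aggregate the counts per key.
counts = {word: reduce(lambda a, b: a + b, vals)
          for word, vals in grouped.items()}

print(counts["spark"])  # 2
```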


From: Ashic Mahtab []
Sent: 28 June 2015 22:21
To: YaoPau; Apache Spark
Subject: RE: What does "Spark is not just MapReduce" mean? Isn't every Spark job a form of

Spark comes with quite a few components. At its core is...surprise...Spark Core. This provides
the core things required to run Spark jobs. Spark provides a lot of operators out of the box...take
a look at the RDD API docs.

While all of them can be implemented with variations of map and reduce, there are optimisations
to be gained in terms of data locality, etc., and the additional operators simply make life easier.
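As a sketch of why the extra operators matter: here is an inner join hand-rolled out of the map/shuffle/reduce pattern in plain Python. In Spark the whole block collapses to a single call like rdd.join(other). Again, this is an illustration, not Spark code:

```python
# Hand-rolled inner join via map/shuffle/reduce on two keyed datasets.
users  = [(1, "alice"), (2, "bob")]
orders = [(1, "book"), (1, "pen"), (2, "mug")]

# "Map" phase: tag each record with its source so both survive the shuffle.
tagged = [(k, ("U", v)) for k, v in users] + \
         [(k, ("O", v)) for k, v in orders]

# "Shuffle" phase: group all tagged records by key.
by_key = {}
for k, rec in tagged:
    by_key.setdefault(k, []).append(rec)

# "Reduce" phase: pair every user record with every order record per key.
joined = [(k, (u, o))
          for k, recs in by_key.items()
          for tag_u, u in recs if tag_u == "U"
          for tag_o, o in recs if tag_o == "O"]

print(sorted(joined))
```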

In addition to the core stuff, Spark also brings things like Spark Streaming, Spark SQL and
DataFrames, MLlib, GraphX, etc. Spark Streaming gives you micro-batches of RDDs at periodic
intervals. Think "give me the last 15 seconds of events every 5 seconds". You can then program
against that small collection, and the job will run in a fault-tolerant manner on your cluster.
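The windowing idea ("last 15 seconds of events, every 5 seconds") can be sketched in plain Python. This only illustrates the window semantics, not how Spark Streaming is actually implemented:

```python
# Sliding window over timestamped events: window length 15s, slide 5s.
events = [(t, f"event-{t}") for t in range(0, 30, 2)]  # (timestamp, payload)

WINDOW, SLIDE = 15, 5

def window_at(now, events):
    """Return events whose timestamps fall in (now - WINDOW, now]."""
    return [e for t, e in events if now - WINDOW < t <= now]

# Every SLIDE seconds you get one micro-batched window to program against.
for now in range(SLIDE, 31, SLIDE):
    batch = window_at(now, events)
    # ...process(batch), e.g. count events, aggregate, etc.
```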
Spark SQL provides Hive-like functionality that works nicely with various data sources and
RDDs. MLlib provides a lot of out-of-the-box machine learning algorithms, and the new Spark ML project
provides a nice, elegant pipeline API to take care of a lot of common machine learning tasks.
GraphX allows you to represent data as graphs and run graph algorithms on them. E.g. you can
represent your data as RDDs of vertices and edges, and run PageRank on a distributed cluster.
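For a feel of the kind of computation GraphX distributes, here is a minimal PageRank iteration on a tiny in-memory graph. Plain Python and illustrative only; GraphX's actual implementation (built on its Pregel API) works on partitioned vertex and edge RDDs:

```python
# Minimal PageRank on a tiny directed graph.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
nodes = {n for edge in edges for n in edge}
out_degree = {n: sum(1 for src, _ in edges if src == n) for n in nodes}

ranks = {n: 1.0 for n in nodes}
DAMPING = 0.85

for _ in range(20):
    # Each vertex sends its rank, split evenly, along its out-edges.
    contrib = {n: 0.0 for n in nodes}
    for src, dst in edges:
        contrib[dst] += ranks[src] / out_degree[src]
    # Standard PageRank update with damping.
    ranks = {n: (1 - DAMPING) + DAMPING * c for n, c in contrib.items()}

print(max(ranks, key=ranks.get))  # the most "important" vertex
```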

And there's more. So, yeah...Spark is definitely "not just" MapReduce. :)
> Date: Sun, 28 Jun 2015 09:13:18 -0700
> From:<>
> To:<>
> Subject: What does "Spark is not just MapReduce" mean? Isn't every Spark job a form of MapReduce?
> I've heard "Spark is not just MapReduce" mentioned during Spark talks, but it
> seems like every method that Spark has is really doing something like (Map
> -> Reduce) or (Map -> Map -> Map -> Reduce) etc behind the scenes, with the
> performance benefit of keeping RDDs in memory between stages.
> Am I wrong about that? Is Spark doing anything more efficiently than a
> series of Maps followed by a Reduce in memory? What methods does Spark have
> that can't easily be mapped (with somewhat similar efficiency) to Map and
> Reduce in memory?
> --
> View this message in context:
> Sent from the Apache Spark User List mailing list archive at
