spark-user mailing list archives

From Nathan Kronenfeld <nkronenf...@oculusinfo.com>
Subject Re: Have different reduce key than mapper key
Date Wed, 23 Jul 2014 15:48:30 GMT
You do them sequentially in code; Spark will take care of combining them in
execution. So, something like:

foo.map(fcn to (K1, V1)).reduceByKey(fcn from (V1, V1) to V1).map(fcn from (K1, V1) to (K2, V2))
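
Since you're on the Java API, here is a minimal sketch of that same chain using
JavaPairRDD (this assumes Java 8 lambdas and an illustrative word-count example;
the data, key choices, and class name are just made up to show the pattern):

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class ReKeyExample {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("ReKeyExample").setMaster("local[*]");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Hypothetical input data
            JavaRDD<String> words = sc.parallelize(Arrays.asList("spark", "map", "reduce", "spark"));

            // Map to (K1, V1): (word, 1), then reduce on K1: (V1, V1) -> V1
            JavaPairRDD<String, Integer> counts = words
                .mapToPair(w -> new Tuple2<String, Integer>(w, 1))
                .reduceByKey((a, b) -> a + b);

            // Re-key after the reduce: (K1, V1) -> (K2, V2), here (word length, count)
            JavaPairRDD<Integer, Integer> byLength = counts
                .mapToPair(t -> new Tuple2<Integer, Integer>(t._1().length(), t._2()));

            System.out.println(byLength.collect());
            sc.stop();
        }
    }

The re-keying is just an ordinary map after the reduce; Spark pipelines the two
map stages around the shuffle, so there is no extra pass over the data.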


On Wed, Jul 23, 2014 at 11:22 AM, soumick86 <sdasgupta@dstsystems.com>
wrote:

> How can I transform the mapper key at the reducer output? The functions I
> have encountered are combineByKey, reduceByKey, etc., but they work on the
> values and not on the key. For example, below is what I want to achieve, but
> it seems like I can only have K1 and not K2:
>
> Mapper----->(K1,V1)----->Reducer----->(K2,V2)
>
> I must be missing something. Is there a class/method available? Also, I am
> using the Java API.
>
>
>



-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenfeld@oculusinfo.com
