spark-user mailing list archives

From Sonal Goyal <sonalgoy...@gmail.com>
Subject Re: Which function in spark is used to combine two RDDs by keys
Date Thu, 13 Nov 2014 11:51:05 GMT
Check cogroup.

Best Regards,
Sonal
Founder, Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>



On Thu, Nov 13, 2014 at 5:11 PM, Blind Faith <person.of.book@gmail.com>
wrote:

> Let us say I have the following two RDDs, with the following key-pair
> values.
>
>     rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]
>
> and
>
>     rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]
>
> Now, I want to join them by key values, so for example I want to return
> the following
>
>     ret = [ (key1, [value1, value2, value5, value6]), (key2, [value3,
> value4, value7]) ]
>
> How can I do this in Spark, using Python or Scala? One way is to use
> join, but join would nest the value lists inside a tuple. I want only
> one flat list of values per key.
>
