spark-user mailing list archives

From Davies Liu <dav...@databricks.com>
Subject Re: Which function in spark is used to combine two RDDs by keys
Date Thu, 13 Nov 2014 17:19:04 GMT
rdd1.union(rdd2).groupByKey()
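
For example, here is a minimal PySpark sketch (the local SparkContext
setup is just for illustration; Scala is analogous). Since your values
are already lists, you can either concatenate them directly with
reduceByKey, or group first and flatten afterwards:

    from pyspark import SparkContext

    sc = SparkContext("local", "combine-rdds")

    rdd1 = sc.parallelize([("key1", ["value1", "value2"]),
                           ("key2", ["value3", "value4"])])
    rdd2 = sc.parallelize([("key1", ["value5", "value6"]),
                           ("key2", ["value7"])])

    # Option 1: concatenate the value lists as they meet.
    ret = rdd1.union(rdd2).reduceByKey(lambda a, b: a + b)

    # Option 2: group the lists per key, then flatten each group.
    ret2 = (rdd1.union(rdd2)
                .groupByKey()
                .mapValues(lambda lists: [v for lst in lists for v in lst]))

    print(ret.collect())
    # [('key1', ['value1', 'value2', 'value5', 'value6']),
    #  ('key2', ['value3', 'value4', 'value7'])]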

On Thu, Nov 13, 2014 at 3:41 AM, Blind Faith <person.of.book@gmail.com> wrote:
> Let us say I have the following two RDDs, with the following key-value
> pairs.
>
>     rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]
>
> and
>
>     rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]
>
> Now, I want to combine them by key, so that, for example, I return the
> following:
>
>     ret = [ (key1, [value1, value2, value5, value6]),
>             (key2, [value3, value4, value7]) ]
>
> How can I do this in Spark, using Python or Scala? One way is to use
> join, but join would create a tuple inside a tuple, and I want only one
> tuple per key.


