spark-issues mailing list archives

From "Shixiong Zhu (JIRA)" <>
Subject [jira] [Commented] (SPARK-4644) Implement skewed join
Date Wed, 03 Dec 2014 03:34:12 GMT


Shixiong Zhu commented on SPARK-4644:

It looks like `groupByKey` is really different from `join`. The signature of `groupByKey` is `def
groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])]`, so the return value is `RDD[(K,
Iterable[V])]`. It exposes the internal data structure to the user as an `Iterable`, and the user
can write `rdd.groupByKey().repartition(5)`. Therefore, the `Iterable` returned by `groupByKey`
needs to be `Serializable` so that it can be used on other nodes.
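To illustrate the constraint, here is a minimal plain-Scala sketch (no Spark runtime; the `roundTrip` helper is hypothetical) of why the grouped `Iterable` must survive Java serialization before a downstream `repartition` can ship the `(K, Iterable[V])` pairs to another node:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Serialize a value to bytes and read it back, the way a shuffle
// would when moving a record between executors.
def roundTrip[T](value: T): T = {
  val bytes = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(bytes)
  out.writeObject(value)
  out.close()
  val in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
  in.readObject().asInstanceOf[T]
}

// An ArrayBuffer-backed Iterable is Serializable, so the group can be
// rebuilt on another node; a non-Serializable buffer would throw
// java.io.NotSerializableException at this point.
val grouped: (String, Iterable[Int]) =
  ("book-1", scala.collection.mutable.ArrayBuffer(1, 2, 3))
val restored = roundTrip(grouped)
```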

The `ChunkBuffer` I designed for skewed join is only used internally and won't be exposed to the
user. So for now it's not `Serializable` and cannot be used by `groupByKey`.

In summary, we need a special `Iterable` for `groupByKey`: it should spill to disk when there is
insufficient memory, and it should be usable on any node, which means this `Iterable` must be able
to read other nodes' disks (maybe via BlockManager?). Therefore, for now I cannot find a general
approach that works for both `join` and `groupByKey`.
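One possible shape for such a spilling `Iterable` is sketched below. This is a hypothetical illustration, not Spark code: the class name, the text-file spill format, and the fixed in-memory portion are all assumptions, and cross-node access via BlockManager is not attempted.

```scala
import java.io.{File, PrintWriter}
import scala.io.Source

// A disk-backed Iterable: some values are held in memory, the rest were
// spilled to a local file and are streamed back lazily on iteration.
class SpillableIterable(inMemory: Seq[Int], spillFile: File) extends Iterable[Int] {
  def iterator: Iterator[Int] =
    inMemory.iterator ++ Source.fromFile(spillFile).getLines().map(_.toInt)
}

// Usage: two values kept in memory, two spilled to disk.
val spill = File.createTempFile("spill", ".txt")
val pw = new PrintWriter(spill)
pw.println(3)
pw.println(4)
pw.close()

val values = new SpillableIterable(Seq(1, 2), spill)
```

The caller iterates over all four values without knowing which ones came from disk; that transparency is what a `groupByKey`-compatible version would need, plus serializability and remote reads.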

> Implement skewed join
> ---------------------
>                 Key: SPARK-4644
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Shixiong Zhu
>         Attachments: Skewed Join Design Doc.pdf
> Skewed data is not rare. For example, a book recommendation site may have several books
that are liked by most of the users. Running ALS on such skewed data will raise an OutOfMemory
error if some book has too many users to fit into memory. To solve this, we propose a skewed
join implementation.

This message was sent by Atlassian JIRA

