spark-dev mailing list archives

From kaklakariada <christoph.pi...@gmail.com>
Subject groupByKey() and keys with many values
Date Mon, 07 Sep 2015 08:02:18 GMT
Hi,

I already posted this question on the users mailing list
(http://apache-spark-user-list.1001560.n3.nabble.com/Using-groupByKey-with-many-values-per-key-td24538.html)
but did not get a reply. Maybe this is the correct forum to ask.

My problem is that groupByKey().mapToPair() loads all values for a key into memory,
which becomes a problem when the values for a single key don't fit. This was not an
issue with Hadoop MapReduce, where the Iterable passed to the reducer streams the
values from disk.

In Spark, the Iterable passed to mapToPair() is backed by a CompactBuffer
containing all values.
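To make the pattern concrete, here is a minimal, self-contained sketch (Java 8
lambdas, local master, made-up keys and a toy sum; the names are placeholders,
not my actual job):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class GroupByKeyExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("groupByKey-example").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Toy data; in my real job a single key has far more values than fit in memory.
        JavaPairRDD<String, Integer> pairs = sc.parallelizePairs(Arrays.asList(
                new Tuple2<>("a", 1), new Tuple2<>("a", 2), new Tuple2<>("b", 3)));

        // groupByKey() materializes all values of a key into a CompactBuffer;
        // the Iterable handed to mapToPair() just wraps that in-memory buffer.
        JavaPairRDD<String, Integer> sums = pairs
                .groupByKey()
                .mapToPair(entry -> {
                    int sum = 0;
                    for (Integer v : entry._2()) {   // iterates the in-memory buffer
                        sum += v;
                    }
                    return new Tuple2<>(entry._1(), sum);
                });

        sums.collect().forEach(t -> System.out.println(t._1() + " -> " + t._2()));
        sc.stop();
    }
}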

Is it possible to change this behavior without modifying Spark itself, or are there
plans to change it?

Thank you very much for your help!
Christoph.




