spark-user mailing list archives

From 瀬川 卓也 <takuya_seg...@dwango.co.jp>
Subject Data skewed to a specific partition after sort
Date Thu, 29 Jan 2015 02:57:04 GMT
For example, consider running a word count over a large text corpus (on the order of 100 GB). The word frequencies are clearly skewed and follow a long-tail distribution; words that occur only once (count = 1) probably make up more than 1/10 of all distinct words.

Word count code:

```
import org.apache.spark.rdd.RDD
import org.apache.hadoop.io.compress.GzipCodec

val allWordLineSplited: RDD[String] = // create RDD …
val wordRDD = allWordLineSplited.flatMap(_.split(" "))
val count = wordRDD.map((_, 1)).reduceByKey(_ + _)
val sort = count.map { case (word, cnt) => (cnt, word) }.sortByKey()

sort.saveAsTextFile("/path/to/save/dir/", classOf[GzipCodec])
```

I think the problem is in the sort. In this case, all of the records whose count is 1 end up in a single partition, so the shuffle read for that partition becomes very large. The tasks for the other partitions finish and their executors sit idle waiting, and eventually the executor handling that partition is lost…
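
A quick check like the following should show how the records are spread across the partitions after the sort (just a sketch; `partitionSizes` is a name I made up for this example):

```
// Count the records in each partition of the sorted RDD, to confirm that
// one partition holds almost all of the (count = 1) words.
val partitionSizes = sort
  .mapPartitionsWithIndex((idx, iter) => Iterator((idx, iter.size)))
  .collect()
partitionSizes.foreach { case (idx, size) => println(s"partition $idx: $size records") }
```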

If possible, I would like to equalize the partition sizes after the sort.
If no such technique exists, what would be a better way to perform this computation?
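
One idea I have been considering (only a sketch, not tested at this scale; `sortedByCompositeKey` is just a name for the example) is to sort by the composite key (count, word) instead of the count alone, so that the many count = 1 records are split across partitions by the secondary word key rather than all sharing the single key 1:

```
// Sketch: sort on (count, word). Since the composite keys are almost all
// distinct, the range partitioner can split the count = 1 group across
// several partitions instead of sending it all to one.
import org.apache.hadoop.io.compress.GzipCodec

val sortedByCompositeKey = count                 // count: RDD[(String, Int)] from above
  .map { case (word, cnt) => ((cnt, word), 1) }
  .sortByKey()                                   // implicit Ordering[(Int, String)]
  .keys                                          // back to (count, word) pairs

sortedByCompositeKey.saveAsTextFile("/path/to/save/dir/", classOf[GzipCodec])
```

The saved records are still (count, word) pairs, so the output format should be the same as before; whether this balances the partitions well enough in practice is something I would need to verify.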

Regards, 
Takuya

