spark-user mailing list archives

From unk1102 <umesh.ka...@gmail.com>
Subject How to avoid empty unavoidable group by keys in DataFrame?
Date Sat, 21 May 2016 09:12:14 GMT
Hi, I have a Spark job that does a group-by, which I can't avoid because of my use
case. I have a large dataset, around 1 TB, that I need to process/update in a
DataFrame. Right now the job shuffles a huge amount of data, and the shuffle and
group-by slow everything down. One reason I can see is that my data is skewed:
some of my group-by keys are empty. How do I avoid empty group-by keys in a
DataFrame? Does DataFrame skip empty group-by keys on its own? I group by
around 8 keys.

sourceFrame.select("blabla")
    .groupBy("col1", "col2", "col3", ..., "col8")
    .agg("blabla");
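For what it's worth, here is a minimal sketch of one way to drop rows with null
or empty group-by keys before they reach the shuffle, assuming the keys are
string columns; the key names and the sum("value") aggregate are placeholders
standing in for the real columns:

    import org.apache.spark.sql.functions.{col, length, trim, sum}

    // Hypothetical key list; substitute the real 8 group-by columns.
    val keys = Seq("col1", "col2", "col3", "col4", "col5", "col6", "col7", "col8")

    // Keep only rows where every key is non-null and non-blank, so rows
    // with empty keys are filtered out before the groupBy shuffles them.
    val nonEmptyKeys = keys
      .map(k => col(k).isNotNull && length(trim(col(k))) > 0)
      .reduce(_ && _)

    val grouped = sourceFrame
      .filter(nonEmptyKeys)
      .groupBy(keys.map(col): _*)
      .agg(sum("value")) // placeholder aggregate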



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-avoid-empty-unavoidable-group-by-keys-in-DataFrame-tp26992.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org

