spark-user mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From Yeikel <em...@yeikel.com>
Subject Re: What is the best way to take the top N entries from a hive table/data source?
Date Tue, 14 Apr 2020 18:45:15 GMT
Looking at the output of explain, I can see a CollectLimit step. Does that
work the same way as a regular .collect(), where all the records are sent to
the driver?


spark.sql("select * from db.table limit 1000000").explain(false)
== Physical Plan ==
CollectLimit 1000000
+- FileScan parquet ... 806 more fields] Batched: false, Format: Parquet,
Location: CatalogFileIndex[...], PartitionCount: 3, PartitionFilters: [],
PushedFilters: [], ReadSchema:.....

The number of partitions is 1, so that makes sense.

spark.sql("select * from db.table limit 1000000").rdd.partitions.size = 1

As a follow-up, I tried to repartition the resulting dataframe, and while I
can no longer see the CollectLimit step, it did not make any difference to
the job. I still see a big task at the end that ends up failing.

spark.sql("select * from db.table limit
1000000").repartition(1000).explain(false)

Exchange RoundRobinPartitioning(1000)
+- GlobalLimit 1000000
   +- Exchange SinglePartition
      +- LocalLimit 1000000  -> Is this a collect?
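
For completeness, this is the kind of alternative I am considering in case an
exact top N turns out not to be required; the 0.01 fraction below is only
illustrative, not derived from the real table size. Sampling is applied per
partition during the scan, so the plan has no single-partition exchange and
the work stays spread across tasks:

spark.sql("select * from db.table")
  .sample(false, 0.01)  // approximate subset taken per partition; no global limit needed
  .explain(false)       // plan shows Sample over the FileScan, without Exchange SinglePartition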





--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

