Hi,

You mentioned:

In general, is this optimization done for all columnar databases or file formats?


Have you tried it with an ORC file? ORC is another columnar file format.
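For instance, something along these lines in spark-shell (a minimal sketch, assuming a Hive-enabled shell where sqlContext is a HiveContext, and reusing the people.parquet path from your example; the ORC output path is made up):

// Enable ORC predicate pushdown (off by default in Spark 1.6)
sqlContext.setConf("spark.sql.orc.filterPushdown", "true")

// Write a copy of the Parquet data out as ORC
val peopleDF = sqlContext.read.parquet("file:/home/spark/spark-1.6.1/people.parquet")
peopleDF.write.orc("file:/home/spark/people.orc")

// Run the same filtered query against the ORC copy and inspect the plan
val orcDF = sqlContext.read.orc("file:/home/spark/people.orc")
orcDF.registerTempTable("orcFile")
sqlContext.sql("SELECT name FROM orcFile WHERE age = 50 AND name = 'someone'").explain()

Then look for PushedFilters in the Scan OrcRelation node, just as you did with Parquet.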

Spark follows a rule-based optimizer (Catalyst). It does not have a cost-based optimizer yet; one is planned for a future release, I believe:


https://issues.apache.org/jira/browse/SPARK-16026
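You can also watch the rule-based optimizer at work by printing the plan at each stage (a sketch, using the nameDF from your example below):

// Logical plan after analysis, after the optimizer rules have run,
// and the physical plan that actually executes
println(nameDF.queryExecution.analyzed)
println(nameDF.queryExecution.optimizedPlan)
println(nameDF.queryExecution.executedPlan)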

HTH



Dr Mich Talebzadeh


LinkedIn  https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw


http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed. The author will in no case be liable for any monetary damages arising from such loss, damage or destruction.



On 1 August 2016 at 19:17, Sandeep Joshi <sanjos100@gmail.com> wrote:

Hi

I just want to confirm my understanding of the physical plan generated by Spark SQL while reading from a Parquet file.

When multiple predicates are pushed to PrunedFilteredScan, does Spark ensure that the Parquet file is not read multiple times while evaluating each predicate?

In general, is this optimization done for all columnar databases or file formats?

When I ran the following query in spark-shell

> val nameDF = sqlContext.sql("SELECT name FROM parquetFile WHERE age = 50 AND name = 'someone'")

I saw that both filters are pushed down, but I can't seem to find where they are applied to the file data.

> nameDF.explain()

shows

Project [name#112]
+- Filter ((age#111L = 50) && (name#112 = someone))
   +- Scan ParquetRelation[name#112,age#111L] InputPaths: file:/home/spark/spark-1.6.1/people.parquet,
      PushedFilters: [EqualTo(age,50), EqualTo(name,someone)]