I believe this is a regression. Does not work for me either. There is a Jira on parquet wildcards which is resolved, I'll see about getting it reopened




-------- Original message --------
From: Vaxuki
Date:05/07/2015 7:38 AM (GMT-05:00)
To: Olivier Girardot
Cc: user@spark.apache.org
Subject: Re: Spark 1.3.1 and Parquet Partitions

Olivier 
Nope. Wildcard extensions don't work. I am debugging the code to figure out what's wrong. I know I am using 1.3.1 for sure.

Pardon typos...

On May 7, 2015, at 7:06 AM, Olivier Girardot <ssaboum@gmail.com> wrote:

"hdfs://some ip:8029/dataset/*/*.parquet" doesn't work for you ?

On Thu, May 7, 2015 at 03:32, vasuki <vaxuki@gmail.com> wrote:
Spark 1.3.1 -
I have a Parquet dataset on HDFS, partitioned by a string column, looking like this:
/dataset/city=London/data.parquet
/dataset/city=NewYork/data.parquet
/dataset/city=Paris/data.parquet
…

I am trying to load it using sqlContext.parquetFile(
"hdfs://some ip:8029/dataset/< what do i put here >

No leads so far. Is there a way I can load the partitions? I am running on a
cluster, not local.
-V
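
[For readers finding this thread later: Spark 1.3.x introduced automatic partition discovery for Parquet, which is meant to handle exactly this layout when you point `parquetFile` at the base directory rather than a wildcard. A minimal sketch of that approach (the `hdfs://some ip:8029` host is the placeholder from the original post, and `sqlContext` is assumed to be the `SQLContext` already available in spark-shell):]

```scala
// Sketch against the Spark 1.3.x API, assuming spark-shell provides sqlContext.
// "hdfs://some ip:8029" is the original poster's placeholder, not a real address.

// Point parquetFile at the BASE directory of the dataset; partition discovery
// should turn the city=... subdirectories into a "city" column in the schema.
val df = sqlContext.parquetFile("hdfs://some ip:8029/dataset")

df.printSchema()                      // schema should now include a "city" field

// Filtering on the partition column prunes to the matching directories.
df.filter(df("city") === "London").show()
```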



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-3-1-and-Parquet-Partitions-tp22792.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org