spark-user mailing list archives

From Exie <tfind...@prodevelop.com.au>
Subject Spark 1.3.0 -> 1.3.1 produces java.lang.NoSuchFieldError: NO_FILTER
Date Fri, 15 May 2015 05:21:07 GMT
Hello Bright Sparks,

I was using Spark 1.3.0 to push data out to Parquet files. It has been
working great: super fast, and an easy way to persist data frames.
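
For reference, the pattern I'm using is roughly this sketch (the paths and
the source DataFrame are placeholders, not my real job):

```scala
// Minimal Spark 1.3.x Parquet round-trip (placeholder paths).
// The write side ran on 1.3.0; the read side is what now fails on 1.3.1.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("parquet-roundtrip"))
val sqlContext = new SQLContext(sc)

// Build a DataFrame from any source and persist it as Parquet (1.3 API).
val df = sqlContext.jsonFile("/data/input.json")
df.saveAsParquetFile("/data/out.parquet")

// Reading the same files back is where 1.3.1 throws NoSuchFieldError.
val back = sqlContext.parquetFile("/data/out.parquet")
back.count()
```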

However, I just swapped out Spark 1.3.0 and picked up the tarball for 1.3.1.
I unzipped it, copied my config over, and then went to read one of my Parquet
files from the last release, when I got this:
java.lang.NoSuchFieldError: NO_FILTER
	at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$refresh$6.apply(newParquet.scala:299)
	at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$refresh$6.apply(newParquet.scala:297)
	at scala.collection.parallel.mutable.ParArray$Map.leaf(ParArray.scala:658)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:54)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
	at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:56)
	at scala.collection.parallel.mutable.ParArray$Map.tryLeaf(ParArray.scala:650)

I did some googling; it appears there were some changes to the Parquet file
format between releases.

I found a reference to an option:
sqlContext.setConf("spark.sql.parquet.useDataSourceApi", "false") 

I tried that, but I got the same error (with a slightly different cause, though):
java.lang.NoSuchFieldError: NO_FILTER
	at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$readMetaData$3.apply(ParquetTypes.scala:494)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$readMetaData$3.apply(ParquetTypes.scala:494)
	at scala.Option.map(Option.scala:145)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$.readMetaData(ParquetTypes.scala:494)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$.readSchemaFromFile(ParquetTypes.scala:515)
	at org.apache.spark.sql.parquet.ParquetRelation.<init>(ParquetRelation.scala:67)
	at org.apache.spark.sql.SQLContext.parquetFile(SQLContext.scala:542)

I presume it's not just me; has anyone else come across this?

Any suggestions on how to work around it? Can I set an option like
"old.parquet.format" or something?



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-3-0-1-3-1-produces-java-lang-NoSuchFieldError-NO-FILTER-tp22897.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org

