spark-dev mailing list archives

From Stephen Boesch <java...@gmail.com>
Subject SQLQuerySuite error
Date Thu, 24 Jul 2014 19:04:55 GMT
Are other developers seeing the following error for the recently added
substr() method? If not, any ideas why the following test invocation is
failing for me - i.e., how would it need to be tweaked?

mvn -Pyarn -Pcdh5 test  -pl sql/core
-DwildcardSuites=org.apache.spark.sql.SQLQuerySuite

(Note: cdh5 is a custom profile for CDH 5.0.0, but it should not affect
these results.)
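One guess (my own untested assumption, not something established in this
thread): building only sql/core with -pl can leave a stale sql/catalyst
snapshot on the classpath. Maven's -am (--also-make) flag rebuilds a
module's reactor dependencies alongside it, which would rule that out:

```shell
# Assumption: a stale sql/catalyst artifact could explain an analyzer-side
# failure. -am (--also-make) is a standard Maven reactor option that also
# builds the modules sql/core depends on.
mvn -Pyarn -Pcdh5 test -pl sql/core -am \
  -DwildcardSuites=org.apache.spark.sql.SQLQuerySuite
```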

Only test("SPARK-2407 Added Parser of SQL SUBSTR()") fails; all 33 other
tests pass.

SQLQuerySuite:
- SPARK-2041 column name equals tablename
- SPARK-2407 Added Parser of SQL SUBSTR() *** FAILED ***
  Exception thrown while executing query:
  == Logical Plan ==
  java.lang.UnsupportedOperationException
  == Optimized Logical Plan ==
  java.lang.UnsupportedOperationException
  == Physical Plan ==
  java.lang.UnsupportedOperationException
  == Exception ==
  java.lang.UnsupportedOperationException
  java.lang.UnsupportedOperationException
  at org.apache.spark.sql.catalyst.analysis.EmptyFunctionRegistry$.lookupFunction(FunctionRegistry.scala:33)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$5$$anonfun$applyOrElse$3.applyOrElse(Analyzer.scala:131)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$5$$anonfun$applyOrElse$3.applyOrElse(Analyzer.scala:129)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:165)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
  at scala.collection.Iterator$class.foreach(Iterator.scala:727)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
  at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
  at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
  at scala.collection.AbstractIterator.to(Iterator.scala:1157)
  at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
  at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
  at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
  at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:168)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1(QueryPlan.scala:52)
  at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1$$anonfun$apply$1.apply(QueryPlan.scala:66)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
  at scala.collection.immutable.List.foreach(List.scala:318)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
  at scala.collection.AbstractTraversable.map(Traversable.scala:105)
  at
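The top frame points at EmptyFunctionRegistry.lookupFunction, which
suggests the SQLContext under test is resolving SUBSTR against a registry
that rejects every lookup. A minimal self-contained sketch (my own stub,
not the actual Spark source) of that behavior:

```scala
// Hypothetical stub illustrating the failure mode seen in the trace:
// a function registry that rejects every lookup, so any SQL function
// (e.g. SUBSTR) reaching it throws UnsupportedOperationException.
trait FunctionRegistry {
  def lookupFunction(name: String, children: Seq[Any]): Any
}

object EmptyFunctionRegistry extends FunctionRegistry {
  // Every unresolved function ends up here and throws, which is what
  // the analyzer reports for each plan phase in the test output above.
  def lookupFunction(name: String, children: Seq[Any]): Any =
    throw new UnsupportedOperationException
}
```

If that matches what is happening, the question becomes why the analyzer
in this build falls back to the empty registry for SUBSTR at all.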
