spark-user mailing list archives

From Aaron Davidson <ilike...@gmail.com>
Subject Re: SparkSQL "where" with BigDecimal type gives stacktrace
Date Sun, 30 Mar 2014 18:04:58 GMT
Well, the error is coming from this case statement not matching on the
BigDecimal type:
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala#L41

This seems to be a bug: there is a corresponding Catalyst DataType for BigDecimal,
just no way to produce a schema for it. A patch should be straightforward enough,
adding a case that matches against typeOf[BigDecimal], assuming the omission was not
intentional for some reason.
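For illustration, here is a minimal, self-contained sketch of the kind of case that
would need to be added (plain Scala reflection, no Spark dependency; the string
results stand in for Catalyst DataType objects, and DecimalType is my assumption
about the matching Catalyst type rather than something taken from the linked file):

import scala.reflect.runtime.universe._

// Simplified stand-in for Catalyst's ScalaReflection.schemaFor: map a reflected
// Scala type to the name of a Catalyst DataType. The reported MatchError happens
// because no case covers scala.BigDecimal.
def schemaFor(tpe: Type): String = tpe match {
  case t if t <:< typeOf[String]     => "StringType"
  case t if t <:< typeOf[Double]     => "DoubleType"
  case t if t <:< typeOf[Int]        => "IntegerType"
  // Proposed addition: cover BigDecimal so case classes with BigDecimal fields
  // can produce a schema.
  case t if t <:< typeOf[BigDecimal] => "DecimalType"
}

schemaFor(typeOf[BigDecimal])  // res: String = DecimalType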


On Sun, Mar 30, 2014 at 10:43 AM, smallmonkey491@hotmail.com <
smallmonkey491@hotmail.com> wrote:

> Can I get the whole operation? Then I can try to locate the error.
>
> ------------------------------
>  smallmonkey491@hotmail.com
>
>  *From:* Manoj Samel <manojsameltech@gmail.com>
> *Date:* 2014-03-31 01:16
> *To:* user <user@spark.apache.org>
> *Subject:* SparkSQL "where" with BigDecimal type gives stacktrace
>  Hi,
>
> If I use a where clause on a BigDecimal column, I get a stack trace. Changing
> BigDecimal to Double works ...
> ....
>  scala> case class JournalLine(account: String, credit: BigDecimal,
> debit: BigDecimal, date: String, company: String, currency: String,
> costcenter: String, region: String)
> defined class JournalLine
> ...
>  scala> jl.where('credit > 0).foreach(println)
> scala.MatchError: scala.BigDecimal (of class
> scala.reflect.internal.Types$TypeRef$$anon$3)
> at
> org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:41)
> at
> org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$schemaFor$1.apply(ScalaReflection.scala:45)
> at
> org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$schemaFor$1.apply(ScalaReflection.scala:45)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.List.foreach(List.scala:318)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at
> org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:45)
> at
> org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:38)
> at
> org.apache.spark.sql.catalyst.ScalaReflection$.attributesFor(ScalaReflection.scala:32)
> at
> org.apache.spark.sql.execution.ExistingRdd$.fromProductRdd(basicOperators.scala:128)
> at org.apache.spark.sql.SQLContext.createSchemaRDD(SQLContext.scala:79)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:46)
> at $iwC$$iwC$$iwC.<init>(<console>:48)
> at $iwC$$iwC.<init>(<console>:50)
> at $iwC.<init>(<console>:52)
> at <init>(<console>:54)
> at .<init>(<console>:58)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:777)
> at
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1045)
> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)
> at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:795)
> at
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:840)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:752)
> at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:600)
> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:607)
>  at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:610)
> at
> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:935)
> at
> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:883)
> at
> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:883)
> at
> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:883)
> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:981)
> at org.apache.spark.repl.Main$.main(Main.scala:31)
> at org.apache.spark.repl.Main.main(Main.scala)
>
> Thanks,
>
>
>
