spark-user mailing list archives

From Yin Huai <huaiyin....@gmail.com>
Subject Re: Spark SQL : Join throws exception
Date Tue, 08 Jul 2014 00:16:44 GMT
Hi Subacini,

Just wanted to follow up on this issue. The fix for SPARK-2339 has been merged
into both the master and 1.0 branches.

Thanks,

Yin
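
(Editor's note: for anyone hitting this before picking up the fix, one possible
workaround is to move the predicate into an explicit join condition so the
analyzer can resolve the qualified attribute. This sketch is untested against
the affected build, and the join keys A.id/B.id are placeholders, since the
original report filters a join with no ON clause:

```sql
-- Hypothetical workaround sketch (not verified on the affected version):
-- resolve A.status via an explicit ON clause instead of a bare WHERE
-- over an unconditioned join. A.id and B.id are placeholder join keys.
SELECT *
FROM A_TABLE A
JOIN B_TABLE B ON A.id = B.id
WHERE A.status = 1
```

Note that adding an ON clause changes the query from a filtered cross join to
an equi-join, so it is only equivalent if a real join key exists.)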


On Tue, Jul 1, 2014 at 2:00 PM, Yin Huai <huaiyin.thu@gmail.com> wrote:

> It seems to be a bug. I have opened
> https://issues.apache.org/jira/browse/SPARK-2339 to track it.
>
> Thank you for reporting it.
>
> Yin
>
>
> On Tue, Jul 1, 2014 at 12:06 PM, Subacini B <subacini@gmail.com> wrote:
>
>> Hi All,
>>
>> Running this join query
>>  sql("SELECT * FROM  A_TABLE A JOIN  B_TABLE B WHERE
>> A.status=1").collect().foreach(println)
>>
>> throws
>>
>> Exception in thread "main" org.apache.spark.SparkException: Job aborted
>> due to stage failure: Task 1.0:3 failed 4 times, most recent failure:
>> Exception failure in TID 12 on host X.X.X.X: org.apache.spark.sql.catalyst.errors.package$TreeNodeException:
>> No function to evaluate expression. type: UnresolvedAttribute, tree:
>> 'A.status
>>
>> org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.eval(unresolved.scala:59)
>>
>> org.apache.spark.sql.catalyst.expressions.Equals.eval(predicates.scala:147)
>>
>> org.apache.spark.sql.catalyst.expressions.And.eval(predicates.scala:100)
>>
>> org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
>>
>> org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
>>         scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
>>         scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>
>> org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$1.apply(Aggregate.scala:137)
>>
>> org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$1.apply(Aggregate.scala:134)
>>         org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>>         org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>>
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>>
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>>
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
>>
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>>         org.apache.spark.scheduler.Task.run(Task.scala:51)
>>
>> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>>
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>>         java.lang.Thread.run(Thread.java:695)
>> Driver stacktrace:
>>
>> Can someone help me?
>>
>> Thanks in advance.
>>
>>
>
