spark-issues mailing list archives

From "Patrick Wendell (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-1354) Fail to resolve attribute when query with table name as a qualifier in SQLContext
Date Sun, 30 Mar 2014 17:06:14 GMT

     [ https://issues.apache.org/jira/browse/SPARK-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-1354.
------------------------------------

       Resolution: Fixed
    Fix Version/s: 1.0.0

> Fail to resolve attribute when query with table name as a qualifier in SQLContext
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-1354
>                 URL: https://issues.apache.org/jira/browse/SPARK-1354
>             Project: Apache Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.0
>            Reporter: Saisai Shao
>             Fix For: 1.0.0
>
>
> For a SQLContext with SimpleCatalog, the table name is not registered on the attributes as a qualifier, so a query like "SELECT * FROM records JOIN records1 ON records.key = records1.key" will fail. The logical plan cannot resolve "records.key" because the qualifier "records" is missing. The physical plan is shown below:
>     Project [*]
>      Filter ('records.key = 'records1.key)
>       CartesianProduct
>        ExistingRdd [key#0,value#1], MappedRDD[2] at map at basicOperators.scala:124
>        ParquetTableScan [key#2,value#3], (ParquetRelation ParquetFile, pair.parquet), None)
> And the exception thrown is:
> org.apache.spark.sql.catalyst.errors.package$TreeNodeException: No function to evaluate expression. type: UnresolvedAttribute, tree: 'records.key
>         at org.apache.spark.sql.catalyst.expressions.Expression.apply(Expression.scala:54)
>         at org.apache.spark.sql.catalyst.expressions.Equals.apply(predicates.scala:112)
>         at org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:43)
>         at org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:43)
>         at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
>         at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>         at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:643)
>         at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:643)
>         at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:936)
>         at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:936)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>         at org.apache.spark.scheduler.Task.run(Task.scala:52)
>         at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:211)
>         at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:46)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:722)
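
The failure mode described above — a qualified reference like "records.key" cannot be matched because the catalog never attached the table name to the relation's attributes — can be illustrated with a minimal, self-contained Scala sketch. This is not Spark's actual Catalyst code; the `Attribute` case class and `resolve` helper below are hypothetical stand-ins for illustration only:

```scala
// Minimal sketch (NOT Spark's actual Catalyst implementation) of how
// qualified attribute resolution fails when a catalog registers a table
// without attaching the table name to its attributes as a qualifier.
case class Attribute(name: String, qualifiers: Set[String])

// Resolve a possibly dot-qualified reference such as "records.key"
// against the attributes that are in scope.
def resolve(ref: String, attrs: Seq[Attribute]): Option[Attribute] =
  ref.split('.') match {
    case Array(qualifier, name) =>
      // A qualified reference must match both name and qualifier.
      attrs.find(a => a.name == name && a.qualifiers.contains(qualifier))
    case Array(name) =>
      attrs.find(_.name == name)
    case _ => None
  }

// Before the fix: the catalog leaves the qualifier set empty, so the
// unqualified "key" resolves but the qualified "records.key" does not.
val unqualified = Seq(Attribute("key", Set.empty), Attribute("value", Set.empty))

// After the fix: each attribute carries the table name as a qualifier,
// and the qualified reference used in the JOIN condition resolves.
val qualified = Seq(Attribute("key", Set("records")), Attribute("value", Set("records")))
```

With `unqualified`, `resolve("key", unqualified)` succeeds while `resolve("records.key", unqualified)` returns `None` — mirroring the UnresolvedAttribute error in the stack trace above. With `qualified`, both forms resolve.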



--
This message was sent by Atlassian JIRA
(v6.2#6252)
