spark-issues mailing list archives

From "Apache Spark (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (SPARK-15388) spark sql "CREATE FUNCTION" throws exception with hive 1.2.1
Date Wed, 18 May 2016 20:07:13 GMT

     [ https://issues.apache.org/jira/browse/SPARK-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Apache Spark reassigned SPARK-15388:
------------------------------------

    Assignee:     (was: Apache Spark)

> spark sql "CREATE FUNCTION" throws exception with hive 1.2.1
> ------------------------------------------------------------
>
>                 Key: SPARK-15388
>                 URL: https://issues.apache.org/jira/browse/SPARK-15388
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Yang Wang
>
> spark.sql("CREATE FUNCTION MY_FUNCTION_1 AS 'com.haizhi.bdp.udf.UDFGetGeoCode'") throws org.apache.spark.sql.AnalysisException.
> I was using Hive version 1.2.1.
> The full stack trace is as follows:
> Exception in thread "main" org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:NoSuchObjectException(message:Function bdp.GET_GEO_CODE does not exist));
>      at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:71)
>      at org.apache.spark.sql.hive.HiveExternalCatalog.functionExists(HiveExternalCatalog.scala:323)
>      at org.apache.spark.sql.catalyst.catalog.SessionCatalog.functionExists(SessionCatalog.scala:712)
>      at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createFunction(SessionCatalog.scala:663)
>      at org.apache.spark.sql.execution.command.CreateFunction.run(functions.scala:68)
>      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:57)
>      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:55)
>      at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:69)
>      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
>      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
>      at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
>      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>      at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
>      at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
>      at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:85)
>      at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:85)
>      at org.apache.spark.sql.Dataset.<init>(Dataset.scala:187)
>      at org.apache.spark.sql.Dataset.<init>(Dataset.scala:168)
>      at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
>      at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:541)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

