spark-issues mailing list archives

From "Michael Armbrust (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-3940) SQL console prints error messages three times
Date Tue, 21 Oct 2014 00:16:35 GMT

     [ https://issues.apache.org/jira/browse/SPARK-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Michael Armbrust resolved SPARK-3940.
-------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.2.0

Issue resolved by pull request 2790
[https://github.com/apache/spark/pull/2790]

> SQL console prints error messages three times
> ---------------------------------------------
>
>                 Key: SPARK-3940
>                 URL: https://issues.apache.org/jira/browse/SPARK-3940
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: wangxj
>              Labels: patch
>             Fix For: 1.2.0
>
>   Original Estimate: 0.05h
>  Remaining Estimate: 0.05h
>
> If a SQL statement contains an error, the console prints the error message three times.
> For example:
> {noformat}
> spark-sql> show tablesss;
> show tablesss;
> 14/10/13 20:56:29 INFO ParseDriver: Parsing command: show tablesss
> NoViableAltException(26@[598:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement
| dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement
| alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement |
dropViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement
| dropIndexStatement | dropFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement
| unlockStatement | createRoleStatement | dropRoleStatement | grantPrivileges | revokePrivileges
| showGrants | showRoleGrants | grantRole | revokeRole );])
> 	at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
> 	at org.antlr.runtime.DFA.predict(DFA.java:144)
> 	at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:1962)
> 	at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1298)
> 	at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:938)
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:190)
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:161)
> 	at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:218)
> 	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:226)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:184)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:183)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:221)
> 	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:274)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:209)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> 14/10/13 20:56:30 ERROR SparkSQLDriver: Failed in [show tablesss]
> org.apache.spark.sql.hive.HiveQl$ParseException: Failed to parse: show tablesss
> 	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:225)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:184)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:183)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:221)
> 	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:274)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:209)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> Caused by: org.apache.hadoop.hive.ql.parse.ParseException: line 1:5 cannot recognize
input near 'show' 'tablesss' '<EOF>' in ddl statement
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:193)
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:161)
> 	at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:218)
> 	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:226)
> 	... 47 more
> org.apache.spark.sql.hive.HiveQl$ParseException: Failed to parse: show tablesss
> 	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:225)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:184)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:183)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:221)
> 	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:274)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:209)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> Caused by: org.apache.hadoop.hive.ql.parse.ParseException: line 1:5 cannot recognize
input near 'show' 'tablesss' '<EOF>' in ddl statement
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:193)
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:161)
> 	at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:218)
> 	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:226)
> 	... 47 more
> 14/10/13 20:56:30 ERROR CliDriver: org.apache.spark.sql.hive.HiveQl$ParseException: Failed
to parse: show tablesss
> 	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:225)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
> 	at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:130)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:184)
> 	at org.apache.spark.sql.catalyst.SparkSQLParser$$anonfun$org$apache$spark$sql$catalyst$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:183)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(SparkSQLParser.scala:31)
> 	at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:221)
> 	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:274)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:209)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> Caused by: org.apache.hadoop.hive.ql.parse.ParseException: line 1:5 cannot recognize
input near 'show' 'tablesss' '<EOF>' in ddl statement
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:193)
> 	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:161)
> 	at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:218)
> 	at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:226)
> 	... 47 more
> {noformat}
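The triple output above is the classic log-and-rethrow symptom: the parser prints its own diagnostic, SparkSQLDriver logs the failure and rethrows, and CliDriver catches the same exception and logs it once more. A minimal sketch of the pattern (layer names are illustrative stand-ins, not Spark's actual code):

```python
# Demonstrates the log-and-rethrow anti-pattern behind the triple
# error output: each layer reports the same failure before
# propagating it, so one bad statement is printed three times.
log = []  # stand-in for the console output

def parse(sql):
    # Layer 1: the parser emits its own diagnostic, then raises.
    log.append(f"ParseException: cannot recognize input: {sql}")
    raise ValueError(f"cannot recognize input: {sql}")

def run_driver(sql):
    # Layer 2: the driver logs the failure, then rethrows it
    # without marking it as already reported.
    try:
        parse(sql)
    except ValueError as e:
        log.append(f"ERROR driver layer: {e}")
        raise

def run_cli(sql):
    # Layer 3: the CLI catches the same exception and logs it again.
    try:
        run_driver(sql)
    except ValueError as e:
        log.append(f"ERROR cli layer: {e}")

run_cli("show tablesss")
print(len(log))  # 3: one failure, reported by three layers
```

The usual fix, and the direction taken by pull request 2790, is to report the error at a single layer and let the other layers propagate it silently.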



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

