spark-issues mailing list archives

From "Apache Spark (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-10634) The spark sql fails if the where clause contains a string with " in it.
Date Mon, 03 Oct 2016 18:23:20 GMT

    [ https://issues.apache.org/jira/browse/SPARK-10634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543059#comment-15543059 ]

Apache Spark commented on SPARK-10634:
--------------------------------------

User 'dilipbiswal' has created a pull request for this issue:
https://github.com/apache/spark/pull/15332

> The spark sql fails if the where clause contains a string with " in it.
> -----------------------------------------------------------------------
>
>                 Key: SPARK-10634
>                 URL: https://issues.apache.org/jira/browse/SPARK-10634
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.3.1
>            Reporter: Prachi Burathoki
>
> When running a SQL query in which the WHERE clause contains a string with " in it, the SQL parser throws an error.
> Caused by: java.lang.RuntimeException: [1.127] failure: ``)'' expected but identifier test found
> SELECT clistc215647292, corc1749453704, candc1501025950, SYSIBM_ROW_NUMBER FROM TABLE_1 WHERE ((clistc215647292 = "this is a "test""))
>                                                                                                                               ^
> 	at scala.sys.package$.error(package.scala:27)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:40)
> 	at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:134)
> 	at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:134)
> 	at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:96)
> 	at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:95)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
> 	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
> 	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
> 	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> 	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
> 	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
> 	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:38)
> 	at org.apache.spark.sql.SQLContext$$anonfun$parseSql$1.apply(SQLContext.scala:138)
> 	at org.apache.spark.sql.SQLContext$$anonfun$parseSql$1.apply(SQLContext.scala:138)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:138)
> 	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:933)
> 	at com.ibm.is.drs.engine.spark.sql.task.SQLQueryTask.createTargetRDD(SQLQueryTask.java:106)
> 	at com.ibm.is.drs.engine.spark.sql.SQLQueryNode.createTargetRDD(SQLQueryNode.java:93)
> 	at com.ibm.is.drs.engine.spark.sql.SQLNode.doExecute(SQLNode.java:153)
> 	at com.ibm.is.drs.engine.spark.api.BaseNode.execute(BaseNode.java:291)
> 	at com.ibm.is.drs.engine.spark.api.SessionContext.applyDataShaping(SessionContext.java:840)
> 	at com.ibm.is.drs.engine.spark.api.SessionContext.applyDataShaping(SessionContext.java:752)
> 	at com.ibm.is.drs.engine.spark.api.SparkRefineEngine.applyDataShaping(SparkRefineEngine.java:1011)
> 	... 31 more
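
The parse failure happens because the inner `"` terminates the double-quoted string literal early, so the parser sees `test` as a stray identifier. A common workaround is to render the value as a single-quoted literal and escape any embedded quotes. Below is a minimal sketch of that idea (the helper name and the `TABLE_1` query are illustrative, not part of Spark's API); Spark SQL's historical parser accepts backslash escapes inside string literals, which this relies on:

```python
def spark_sql_string_literal(value: str) -> str:
    """Render value as a single-quoted SQL string literal.

    Escapes backslashes first, then single quotes, so an embedded
    double quote needs no special handling at all.
    """
    escaped = value.replace("\\", "\\\\").replace("'", "\\'")
    return f"'{escaped}'"


# The value from the bug report, containing double quotes:
needle = 'this is a "test"'
query = (
    "SELECT * FROM TABLE_1 WHERE clistc215647292 = "
    + spark_sql_string_literal(needle)
)
print(query)
# SELECT * FROM TABLE_1 WHERE clistc215647292 = 'this is a "test"'
```

Alternatively, staying with double-quoted literals and escaping the inner quotes as `\"` (i.e. `"this is a \"test\""`) avoids the error too; the fix linked above addresses how the parser itself handles such literals.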



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

