spark-issues mailing list archives

From "ABHISHEK KUMAR GUPTA (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SPARK-24099) java.io.CharConversionException: Invalid UTF-32 character prevents me from querying my data in JSON
Date Thu, 26 Apr 2018 10:13:00 GMT
ABHISHEK KUMAR GUPTA created SPARK-24099:
--------------------------------------------

             Summary: java.io.CharConversionException: Invalid UTF-32 character prevents me from querying my data in JSON
                 Key: SPARK-24099
                 URL: https://issues.apache.org/jira/browse/SPARK-24099
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.3.0
         Environment: OS: SUSE 11

Spark Version: 2.3

 
            Reporter: ABHISHEK KUMAR GUPTA


Steps:
 # Launch spark-sql --master yarn
 # CREATE TABLE json(name STRING, age INT, gender STRING, id INT) USING org.apache.spark.sql.json OPTIONS (path "hdfs:///user/testdemo/");
 # Execute the SQL query below:
INSERT INTO json
SELECT 'Shaan',21,'Male',1
UNION ALL
SELECT 'Xing',20,'Female',11
UNION ALL
SELECT 'Mile',4,'Female',20
UNION ALL
SELECT 'Malan',10,'Male',9;
 # SELECT * FROM json;
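The failure in step 4 typically comes not from the rows inserted above, but from a pre-existing file under the table path whose leading bytes trick Jackson's encoding auto-detection into choosing UTF-32. A simplified sketch of that heuristic (the `guess_json_encoding` helper below is hypothetical, not Jackson's actual API, and Jackson's real logic also checks for BOMs first):

```python
def guess_json_encoding(first4: bytes) -> str:
    """Simplified sketch of Jackson's ByteSourceJsonBootstrapper heuristic.

    JSON text must start with an ASCII character ('{', '[', whitespace...),
    so the positions of zero bytes among the first four bytes reveal the
    code-unit width and endianness of the encoding.
    """
    b = first4.ljust(4, b"\x01")  # pad short inputs with a non-zero byte
    if b[0] == 0 and b[1] == 0:
        return "UTF-32BE"  # e.g. 00 00 00 7B for '{'
    if b[2] == 0 and b[3] == 0:
        return "UTF-32LE"  # e.g. 7B 00 00 00
    if b[0] == 0:
        return "UTF-16BE"
    if b[1] == 0:
        return "UTF-16LE"
    return "UTF-8"

print(guess_json_encoding(b'{"name"'))        # plain ASCII JSON -> UTF-8
print(guess_json_encoding(b"\x00\x00\x00{"))  # leading NULs -> UTF-32BE
```

A binary or NUL-padded file under the table path can therefore be routed to Jackson's UTF32Reader even though it was never valid UTF-32, which is where the exception below originates.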

The final query throws the exception below:

Caused by: *java.io.CharConversionException: Invalid UTF-32 character* 0x151a15(above 10ffff) at char #1, byte #7)
 at com.fasterxml.jackson.core.io.UTF32Reader.reportInvalid(UTF32Reader.java:189)
 at com.fasterxml.jackson.core.io.UTF32Reader.read(UTF32Reader.java:150)
 at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.loadMore(ReaderBasedJsonParser.java:153)
 at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:2017)
 at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:577)
 at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$parse$2.apply(JacksonParser.scala:350)
 at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$parse$2.apply(JacksonParser.scala:347)
 at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2585)
 at org.apache.spark.sql.catalyst.json.JacksonParser.parse(JacksonParser.scala:347)
 at org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$$anonfun$3.apply(JsonDataSource.scala:126)
 at org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$$anonfun$3.apply(JsonDataSource.scala:126)
 at org.apache.spark.sql.execution.datasources.FailureSafeParser.parse(FailureSafeParser.scala:61)
 at org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$$anonfun$readFile$2.apply(JsonDataSource.scala:130)
 at org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$$anonfun$readFile$2.apply(JsonDataSource.scala:130)
 at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
 at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
 at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:106)
 at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
 at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
 at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
 at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253)
 at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
 at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
 at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
 at org.apache.spark.scheduler.Task.run(Task.scala:109)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
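For context on the message itself: Unicode scalar values top out at U+10FFFF, so any 4-byte UTF-32 unit decoding to a larger value is invalid by definition, and 0x151a15 in the trace is one such value. A small sketch confirming this with Python's own UTF-32 decoder (variable names are illustrative):

```python
UNICODE_MAX = 0x10FFFF  # largest valid Unicode code point

bad_code_point = 0x151A15  # value reported in the CharConversionException
assert bad_code_point > UNICODE_MAX  # outside the Unicode range

# Big-endian UTF-32 bytes for that value; a conforming decoder rejects them.
raw = bad_code_point.to_bytes(4, "big")
try:
    raw.decode("utf-32-be")
except UnicodeDecodeError as exc:
    print("rejected:", exc.reason)
```

So Jackson is right to refuse the bytes; the question raised by this report is whether Spark's JSON path should trap the resulting exception per record/file instead of failing the whole scan.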

 

Note:

https://issues.apache.org/jira/browse/SPARK-16548 was raised against 1.6 and marked as fixed in 2.3, but I still hit the same error on 2.3.0.

Please advise.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

