[ https://issues.apache.org/jira/browse/SPARK-25198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589547#comment-16589547 ]
Liang-Chi Hsieh commented on SPARK-25198:
-----------------------------------------
I think the {{customSchema}} here refers to Spark's Catalyst SQL data types, not Postgres data types.
For a Postgres json column, a string type column should be mapped to it. Does that not work?
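For example, here is a minimal sketch of that mapping. It reuses the connection details and {{myDataframe}} from the report below and only swaps the Postgres type names for Spark SQL ones (the json column is declared as string):
{code:scala}
// Sketch only: declare the columns with Spark Catalyst SQL type names
// ("integer", "string"), not Postgres type names such as "json" or "text".
val columnTypes =
  """id integer, parameters string, title string, gsm string, gse string,
    |organism string, characteristics string, molecule string, model string,
    |description string, treatment_protocol string, extract_protocol string,
    |source_name string, data_processing string, submission_date string,
    |last_update_date string, status string, type string, contact string,
    |gpl string""".stripMargin

myDataframe.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://db/sequencing")
  .option("customSchema", columnTypes)
  .option("dbtable", "test")
  .option("user", "postgres")
  .option("password", "changeme")
  .save()
{code}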
> org.apache.spark.sql.catalyst.parser.ParseException: DataType json is not supported.
> ------------------------------------------------------------------------------------
>
> Key: SPARK-25198
> URL: https://issues.apache.org/jira/browse/SPARK-25198
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.3.1
> Environment: Ubuntu 18.04, Spark 2.3.1, org.postgresql:postgresql:42.2.4
> Reporter: antonkulaga
> Priority: Major
>
> Whenever I try to save a dataframe in which one of the columns contains a JSON string
> to the latest Postgres, I get org.apache.spark.sql.catalyst.parser.ParseException: DataType
> json is not supported. Since Postgres supports JSON well and I use the latest postgresql client,
> I expect it to work. Here is an example of the code that crashes:
> val columnTypes = """id integer, parameters json, title text, gsm text, gse text, organism text,
>   characteristics text, molecule text, model text, description text, treatment_protocol text,
>   extract_protocol text, source_name text, data_processing text, submission_date text,
>   last_update_date text, status text, type text, contact text, gpl text"""
> myDataframe.write.format("jdbc").option("url", "jdbc:postgresql://db/sequencing")
>   .option("customSchema", columnTypes).option("dbtable", "test")
>   .option("user", "postgres").option("password", "changeme").save()