spark-issues mailing list archives

From "Takeshi Yamamuro (Jira)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-30828) Improve insertInto behaviour
Date Tue, 12 May 2020 10:21:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-30828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takeshi Yamamuro resolved SPARK-30828.
--------------------------------------
    Resolution: Won't Fix

> Improve insertInto behaviour
> ----------------------------
>
>                 Key: SPARK-30828
>                 URL: https://issues.apache.org/jira/browse/SPARK-30828
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, SQL
>    Affects Versions: 3.1.0
>            Reporter: German Schiavon Matteo
>            Assignee: Apache Spark
>            Priority: Minor
>
> Currently, when you call *_insertInto_* to add a DataFrame to an existing table, the
> only safety check is that the number of columns matches; column order is ignored. In addition,
> the error message when the column counts do not match is not very helpful, especially when the
> table has many columns:
> {code:java}
> org.apache.spark.sql.AnalysisException: `default`.`table` requires that the data to be inserted have the same number of columns as the target table: target table has 2 column(s) but the inserted data has 1 column(s), including 0 partition column(s) having constant value(s).;
> {code}
> I think a standard column check would be very helpful here, just like the one Spark
> already performs in most other cases:
>  
> {code:java}
> "cannot resolve 'p2' given input columns: [id, p1];"  
> {code}
>  
>  
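>
> As a caller-side stopgap (a hedged sketch, not part of any proposed fix; the helper name is made up), one can reorder the DataFrame columns to the target table's schema before calling *_insertInto_*, which also surfaces the per-column "cannot resolve" message shown above whenever a column is missing:
> {code:scala}
> import org.apache.spark.sql.{DataFrame, SparkSession}
> import org.apache.spark.sql.functions.col
>
> // Hypothetical helper: align the DataFrame columns with the target table
> // by name before the position-based insertInto.
> def insertByName(spark: SparkSession, df: DataFrame, table: String): Unit = {
>   val targetCols = spark.table(table).schema.fieldNames
>   // select() fails with "cannot resolve '<name>' given input columns: [...]"
>   // if the DataFrame is missing a column of the target table.
>   df.select(targetCols.map(col): _*).write.insertInto(table)
> }
> {code}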



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

