spark-user mailing list archives

From Andrés Ivaldi <iaiva...@gmail.com>
Subject Re: Insert into JDBC
Date Thu, 26 May 2016 22:43:13 GMT
I see. I'm using Spark 1.6.0, and that change seems to be for 2.0, or maybe
it's in 1.6.1, looking at the history.
Thanks, I'll see about updating Spark to 1.6.1.
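For anyone hitting the same issue, here is a minimal sketch of why the positional `INSERT` matters. This is not Spark code; it uses Python's sqlite3 purely as a stand-in for the JDBC target, with hypothetical column names (colA/colB/colC) matching the example below. A positional insert depends on the table's column order matching the source schema, while an insert by column name does not:

```python
# Illustrative sketch (assumed setup, not Spark's actual JDBC writer):
# the table's column order differs from the source row's (colA, colB, colC) order.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (colB TEXT, colA TEXT, colC TEXT)")

# Source row in (colA, colB, colC) order, as a DataFrame schema might be.
row = ("a", "b", "c")

# Positional insert (the statement seen in the profiler): values land in the
# *table's* column order, so colA and colB end up swapped.
conn.execute("INSERT INTO TableName VALUES (?, ?, ?)", row)

# Insert by column name (what JdbcUtils on master generates): order-independent.
conn.execute("INSERT INTO TableName (colA, colB, colC) VALUES (?, ?, ?)", row)

for r in conn.execute("SELECT colA, colB, colC FROM TableName"):
    print(r)
# First row comes back with colA/colB swapped; second row is correct.
```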

On Thu, May 26, 2016 at 3:33 PM, Anthony May <anthonymay@gmail.com> wrote:

> It doesn't appear to be configurable, but it is inserting by column name:
>
> https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L102
>
> On Thu, 26 May 2016 at 16:02 Andrés Ivaldi <iaivaldi@gmail.com> wrote:
>
>> Hello,
>>  I realized that when the DataFrame executes an insert, it inserts by
>> schema column order instead of by column name, i.e.
>>
>> dataframe.write(SaveMode).jdbc(url, table, properties)
>>
>> Reading the profiler, the executed statement is
>>
>> insert into TableName values(a,b,c..)
>>
>> What I need is
>>
>> insert into TableName (colA,colB,colC) values(a,b,c)
>>
>> Is there some configuration for this?
>>
>> Regards.
>>
>> --
>> Ing. Ivaldi Andres
>>
>


-- 
Ing. Ivaldi Andres
