We faced a similar issue, so we write the data to a file and then use Sqoop to export it to MSSQL.
This strategy gave us a significant time benefit.
I was facing the very same issue. The solution is to write to a file and use an Oracle external table to do the insert.
Hope this helps.
What is the size of the data? How much time does it need on HDFS and how much on Oracle? How many partitions do you have on Oracle side?
My spark job writes into oracle db using:
df.write.format("jdbc").option("url", url)
  .option("driver", driver).option("user", user)
  .option("password", password).option("dbtable", tableName)
  .mode("append").save()
It is much slower than writing to HDFS, even though the data to write is small.
Is this expected? Thanks for any clue.
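For what it's worth, a hedged sketch of the same append write with the two standard Spark JDBC tuning options that most often govern its speed, "batchsize" (rows sent per JDBC batch, default 1000) and "numPartitions" (number of parallel connections). Here df, url, driver, user, password, and tableName are placeholders, not values from the original post:

```scala
// Sketch only: all identifiers below are placeholders for your own values.
df.write
  .format("jdbc")
  .option("url", url)                // e.g. jdbc:oracle:thin:@host:1521/service
  .option("driver", driver)
  .option("user", user)
  .option("password", password)
  .option("dbtable", tableName)
  .option("batchsize", "10000")      // rows per JDBC batch; default is 1000
  .option("numPartitions", "8")      // parallel connections writing concurrently
  .mode("append")
  .save()
```

Even tuned this way, row-by-row JDBC inserts rarely match a bulk-load path such as a Sqoop export or an Oracle external table, which is why the write-to-file approaches suggested above tend to be faster.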