spark-user mailing list archives

From Oldskoola <>
Subject Incremental Updates and custom SQL via JDBC
Date Wed, 24 Aug 2016 23:08:50 GMT

I'm building aggregates over streaming data. When new data affects
previously processed aggregates, I need to update or delete the affected rows
before writing the recomputed aggregates back to HDFS (Hive Metastore) and an
SAP HANA table. How would you do this when rewriting the complete DataFrame
every interval is not an option?
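One workaround I've seen discussed (a sketch, not built-in Spark support) is to collect, per micro-batch, the distinct aggregation keys the new data touches, delete those rows over a plain JDBC connection, and then append the recomputed aggregates with the DataFrame writer in Append mode. A minimal helper for the delete step, using only `java.sql`; the table and column names are made up for illustration:

```scala
import java.sql.Connection

object IncrementalDelete {
  // Build a parameterized DELETE for the keys a micro-batch touched,
  // e.g. buildDeleteSql("AGG_TABLE", "AGG_KEY", 3)
  //   == "DELETE FROM AGG_TABLE WHERE AGG_KEY IN (?, ?, ?)"
  def buildDeleteSql(table: String, keyCol: String, nKeys: Int): String = {
    require(nKeys > 0, "need at least one key")
    val placeholders = Seq.fill(nKeys)("?").mkString(", ")
    s"DELETE FROM $table WHERE $keyCol IN ($placeholders)"
  }

  // Delete the affected rows before re-inserting recomputed aggregates.
  // Returns the number of rows deleted.
  def deleteAffected(conn: Connection, table: String, keyCol: String,
                     keys: Seq[String]): Int = {
    if (keys.isEmpty) return 0
    val stmt = conn.prepareStatement(buildDeleteSql(table, keyCol, keys.size))
    try {
      keys.zipWithIndex.foreach { case (k, i) => stmt.setString(i + 1, k) }
      stmt.executeUpdate()
    } finally stmt.close()
  }
}
```

The idea would be to call this from the driver (the set of distinct keys per interval should be small for aggregates) and then `df.write.mode("append").jdbc(...)` for the recomputed rows. For the Hive side the analogous trick is partitioning by the aggregation key and overwriting only the touched partitions.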

Somewhat related is the question of custom JDBC SQL for writing to the SAP
HANA DB. How would you implement SAP HANA specific commands when the built-in
JDBC DataFrame writer is not sufficient? In this case I primarily want to do
the incremental updates described above, and maybe also send HANA-specific
CREATE TABLE syntax for columnar store and time tables.
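For HANA-specific DDL and upserts the DataFrameWriter can be bypassed entirely: open a JDBC connection yourself (typically per partition, inside `df.foreachPartition`) and send whatever SQL HANA understands. A sketch of the statements involved; the column layout is invented for illustration, and the `CREATE COLUMN TABLE` and `UPSERT ... WITH PRIMARY KEY` syntax is taken from SAP's HANA SQL reference:

```scala
import java.sql.Connection

object HanaSql {
  // HANA columnar-store DDL; the column definitions are illustrative only.
  def createColumnTableSql(table: String): String =
    s"""CREATE COLUMN TABLE $table (
       |  AGG_KEY   NVARCHAR(64) PRIMARY KEY,
       |  AGG_VALUE DOUBLE
       |)""".stripMargin

  // HANA's UPSERT updates the row if the primary key already exists,
  // otherwise it inserts a new one.
  def upsertSql(table: String): String =
    s"UPSERT $table VALUES (?, ?) WITH PRIMARY KEY"

  // Meant to be called from df.foreachPartition { rows => ... }, with a
  // connection opened inside the partition (JDBC connections are not
  // serializable, so they cannot be created on the driver and shipped out).
  def upsertPartition(conn: Connection, table: String,
                      rows: Iterator[(String, Double)]): Unit = {
    val stmt = conn.prepareStatement(upsertSql(table))
    try {
      rows.foreach { case (k, v) =>
        stmt.setString(1, k)
        stmt.setDouble(2, v)
        stmt.addBatch()
      }
      stmt.executeBatch()
    } finally stmt.close()
  }
}
```

With this pattern any vendor-specific statement (history/time tables, partitioning clauses, MERGE DELTA, etc.) can be issued the same way, since Spark never parses the SQL itself.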

Thank you very much in advance. I'm a little stuck on this one. 


Sent from the Apache Spark User List mailing list archive.