Hi Henry,

I didn’t catch your email until now. When you wrote to the database, how did you enforce the schema? Did the data frames just spit everything out with the necessary keys?

 

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685

www.massstreet.net

www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData

 

From: Henry Tremblay [mailto:paulhtremblay@gmail.com]
Sent: Tuesday, February 28, 2017 3:56 PM
To: user@spark.apache.org
Subject: Re: using spark to load a data warehouse in real time

 

We did this all the time at my last position.

1. We had unstructured data in S3.

2. We read directly from S3 and then gave the data structure with a DataFrame in Spark.

3. We wrote the results back to S3.

4. We used Redshift's super-fast parallel load to copy the results into a table. A rough sketch of the whole pipeline is below.
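
Something like this, assuming Spark 2.x; the bucket names, record layout, table, and credentials are all made up for illustration, and the Redshift JDBC driver is assumed to be on the classpath:

import java.sql.DriverManager
import org.apache.spark.sql.SparkSession

object S3ToRedshift {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("s3-to-redshift").getOrCreate()
    import spark.implicits._

    // 1-2. Read raw text from S3 and impose structure with a DataFrame.
    val structured = spark.read.textFile("s3a://my-bucket/raw/")
      .map { line =>
        val p = line.split('\t')            // assume tab-delimited records
        (p(0), p(1), p(2).toInt)
      }
      .toDF("user_id", "event", "amount")

    // 3. Write the structured result back to S3 in a Redshift-friendly format.
    structured.write.mode("overwrite").csv("s3a://my-bucket/clean/")

    // 4. Let Redshift do the parallel load itself with COPY.
    val conn = DriverManager.getConnection(
      "jdbc:redshift://example.redshift.amazonaws.com:5439/dw", "etl", "secret")
    try {
      conn.createStatement().execute(
        """COPY events FROM 's3://my-bucket/clean/'
          |IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load'
          |FORMAT AS CSV""".stripMargin)
    } finally conn.close()

    spark.stop()
  }
}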

Henry

 

On 02/28/2017 11:04 AM, Mohammad Tariq wrote:

You could try this as a blueprint:

 

Read the data in through Spark Streaming. Iterate over it, converting each RDD into a DataFrame. Use these DataFrames to perform whatever processing is required, then save the results to your target relational warehouse.
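
A minimal sketch of that blueprint, assuming Spark 2.x with a socket source standing in for the real stream; the host, port, record layout, warehouse URL, table, and credentials are all hypothetical:

import java.util.Properties
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamToWarehouse {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("stream-to-warehouse")
    val ssc = new StreamingContext(conf, Seconds(10))
    val lines = ssc.socketTextStream("localhost", 9999)  // stand-in source

    lines.foreachRDD { rdd =>
      if (!rdd.isEmpty()) {
        val spark = SparkSession.builder
          .config(rdd.sparkContext.getConf).getOrCreate()
        import spark.implicits._

        // Give the raw records structure, then clean/transform as needed.
        val df = rdd.map(_.split(','))
          .filter(_.length == 2)
          .map(a => (a(0), a(1)))
          .toDF("user_id", "event")

        // Append this micro-batch to the warehouse over JDBC.
        val props = new Properties()
        props.setProperty("user", "etl")
        props.setProperty("password", "secret")
        df.write.mode("append")
          .jdbc("jdbc:postgresql://warehouse:5432/dw", "staging_events", props)
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}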

 

HTH

 

 

Tariq, Mohammad

about.me/mti

 


 

On Wed, Mar 1, 2017 at 12:27 AM, Mohammad Tariq <dontariq@gmail.com> wrote:


 

On Wed, Mar 1, 2017 at 12:15 AM, Adaryl Wakefield <adaryl.wakefield@hotmail.com> wrote:

I haven’t heard of Kafka Connect. I’ll have to look into it. Kafka would, of course, have to be in any architecture, but it looks like they’re suggesting that Kafka is all you need.

 

My primary concern is the complexity of loading warehouses. I have a web development background, so I have some idea of how to insert data into a database from an application. I’ve since moved on to straight database programming and don’t work with anything that reads from an app anymore.

 

Loading a warehouse requires a lot of data cleaning, plus looking up and generating keys to maintain referential integrity. Usually that’s done in a batch process. Now I have to do it record by record (or a few records at a time). I have some ideas, but I’m not quite there yet.
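
For the key-lookup half of that, one pattern that might carry over to micro-batches is joining each batch against the dimension tables to pick up surrogate keys before the fact rows are inserted. A sketch, with every table, column, and connection detail made up:

import java.util.Properties
import org.apache.spark.sql.{DataFrame, SparkSession}

// Attach surrogate keys to incoming fact rows by joining against a
// dimension table read over JDBC. All names here are hypothetical.
def addCustomerKeys(spark: SparkSession, facts: DataFrame): DataFrame = {
  val props = new Properties()
  props.setProperty("user", "etl")
  props.setProperty("password", "secret")

  // Dimension table holding natural-key -> surrogate-key mappings.
  val dimCustomer = spark.read
    .jdbc("jdbc:postgresql://warehouse:5432/dw", "dim_customer", props)
    .select("customer_natural_id", "customer_sk")

  // A left join keeps unmatched rows; their NULL keys can be routed to
  // an "unknown member" handler instead of breaking the load.
  facts.join(dimCustomer, Seq("customer_natural_id"), "left_outer")
}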

 

I thought Spark SQL would be the way to get this done, but so far all the examples I’ve seen are just SELECT statements; no INSERT or MERGE statements.
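
As far as I know, that’s because Spark doesn’t issue INSERT or MERGE against an external database at all; you hand a DataFrame to the JDBC writer and it performs the inserts for you. A sketch, assuming a "staging" view has already been registered and with made-up connection details:

import java.util.Properties
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("jdbc-write-demo").getOrCreate()

// Any DataFrame works here; this one happens to come from a SQL query
// over a previously registered "staging" temp view.
val cleaned = spark.sql(
  "SELECT user_id, event, amount FROM staging WHERE amount IS NOT NULL")

val props = new Properties()
props.setProperty("user", "etl")
props.setProperty("password", "secret")

// "append" adds rows; there is no upsert/MERGE mode, so merges have to
// land in a staging table and be resolved inside the database itself.
cleaned.write.mode("append")
  .jdbc("jdbc:postgresql://warehouse:5432/dw", "fact_events", props)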


Is anybody using Spark Streaming/SQL to load a relational data warehouse in real time? There isn’t a lot of information on this use case out there. When I google “real time data warehouse load,” nothing I find is up to date. It’s all turn-of-the-century stuff that doesn’t take into account advancements in database technology. Additionally, whenever I try to learn Spark, it’s always the same thing: play with Twitter data, never structured data. And all the CEP use cases are about data science.

 

I’d like to use Spark to load Greenplum in real time. Intuitively, this should be possible. I was thinking Spark Streaming with Spark SQL, along with an ORM, should do it. Am I off base with this? Is the reason there are no examples that there’s a better way to do what I want?




-- 
Henry Tremblay
Robert Half Technology