spark-user mailing list archives

From Venkat Subramanian <vsubr...@gmail.com>
Subject Re: Spark SQL JDBC Connectivity and more
Date Mon, 09 Jun 2014 18:25:01 GMT
1) If I have a standalone Spark application that has already built an RDD,
how can SharkServer2 (or, for that matter, Shark) access *that* RDD and run
queries on it? In all the examples I have seen for Shark, the RDDs (tables)
are created within Shark's own SparkContext and processed there.

This is not possible out of the box with Shark.  If you look at the code for
SharkServer2, though, you'll see that it's just a standard HiveContext under
the covers.  If you modify this startup code, any SchemaRDD you register as
a table in this context will be exposed over JDBC.
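
For what it's worth, here is a minimal sketch of that idea. It assumes
Spark 1.1+, where the Spark SQL Thrift server (HiveThriftServer2) replaced
SharkServer2 and exposes startWithContext for exactly this kind of
embedding; the Record case class and the "records" table name are
illustrative only, not anything from SharkServer2 itself:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

// Illustrative schema; any case class (Product) works with createSchemaRDD.
case class Record(key: Int, value: String)

object StandaloneAppWithJdbc {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("StandaloneAppWithJdbc"))
    val hiveContext = new HiveContext(sc)  // wraps the app's own SparkContext
    import hiveContext.createSchemaRDD     // implicit RDD[Product] -> SchemaRDD

    // An RDD built by the standalone application itself...
    val rdd = sc.parallelize(1 to 100).map(i => Record(i, "val_" + i))
    // ...registered as a table in the same context the server will use.
    rdd.registerTempTable("records")

    // Start the JDBC/ODBC endpoint in-process, sharing this HiveContext,
    // so "records" is queryable from any HiveServer2-compatible client.
    HiveThriftServer2.startWithContext(hiveContext)
  }
}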

[Venkat] Are you saying: pull the SharkServer2 code into my standalone
Spark application (as part of the standalone application's process), pass
the standalone app's SparkContext to SharkServer2's SparkContext at
startup, and voila, we get a SQL/JDBC interface for the RDDs of the
standalone app, exposed as tables? Thanks for the clarification.
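
If that works, the tables should then be reachable from a plain Hive JDBC
client, since SharkServer2 speaks the HiveServer2 wire protocol. A
hypothetical client-side sketch, assuming the default Thrift port (10000),
the Hive JDBC driver on the classpath, and the "records" table from the
sketch above:

import java.sql.DriverManager

object JdbcClientSketch {
  def main(args: Array[String]): Unit = {
    // Standard HiveServer2 JDBC driver and connection URL.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "")
    val stmt = conn.createStatement()
    val rs = stmt.executeQuery("SELECT key, value FROM records LIMIT 10")
    while (rs.next()) {
      println(rs.getInt(1) + " -> " + rs.getString(2))
    }
    rs.close(); stmt.close(); conn.close()
  }
}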



