spark-user mailing list archives

From infa elance <>
Subject Spark Newbie question
Date Thu, 11 Jul 2019 17:19:55 GMT
This is a stand-alone Spark cluster. My understanding is that Spark is an
execution engine, not a storage layer.
Spark processes data in memory, but when someone refers to a Spark table
created through Spark SQL (from a DataFrame/RDD), what exactly are they referring to?

Could it be a Hive table? If yes, is it the same Hive metastore that Spark uses?
Or is it a table in memory? If yes, how can an external app access it?
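For context, a "Spark table" can be either of two things, and a short sketch may make the distinction concrete. This is a minimal, hypothetical example (the app name, input path, and table names are made up, not from the original message); it contrasts a session-scoped temporary view with a persistent table registered in the Hive metastore:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical setup for a stand-alone cluster.
val spark = SparkSession.builder()
  .appName("table-example")
  .enableHiveSupport()   // required for persistent, metastore-backed tables
  .getOrCreate()

// Hypothetical input path.
val df = spark.read.json("/tmp/people.json")

// 1) Temporary view: an entry in this SparkSession's in-memory catalog.
//    It disappears when the session ends, so external apps cannot see it.
df.createOrReplaceTempView("people_view")
spark.sql("SELECT * FROM people_view").show()

// 2) Persistent table: registered in the Hive metastore, with data written
//    to the warehouse directory. Other applications sharing the same
//    metastore (e.g. via the Spark Thrift server) can query it.
df.write.saveAsTable("people_table")
```

So when people say "Spark table", they usually mean one of these: an in-memory catalog entry tied to one session, or a Hive-metastore-backed table that outlives the session.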

Spark version with hadoop : spark-2.0.2-bin-hadoop2.7

Thanks, and I appreciate your help!
