spark-user mailing list archives

From anbutech <anbutec...@outlook.com>
Subject Spark scala/Hive scenario
Date Wed, 07 Aug 2019 18:40:34 GMT
Hi All,

I have a scenario in Spark Scala/Hive:

Day 1:

I have a file with 5 columns that needs to be processed and loaded into
Hive tables.

Day 2:

The next day, the same feed (file) has 8 columns (additional fields) that
need to be processed and loaded into the same Hive tables.

How do we approach this problem without changing the target table schema?
Is there any way we can achieve this?
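One common approach (a sketch only, with hypothetical names) is to project each day's feed onto the table's fixed column list before writing: columns the table does not know about are dropped, and columns missing from the feed are filled with NULLs. The helper below computes that projection; the Spark-specific write is shown in comments because it depends on your session and table setup.

```scala
// Sketch: align an incoming feed's columns to a fixed Hive table schema.
// Extra feed columns (day 2's additional fields) are dropped; columns the
// feed lacks are NULL-filled, so both the 5-column and 8-column files load
// into the same target table. All names here are hypothetical.
object SchemaAlign {

  // Given the target table's column list and the columns actually present
  // in today's feed, return (keep, missing): columns to select as-is, and
  // columns to add as NULL literals. Order follows the target schema.
  def align(targetCols: Seq[String],
            feedCols: Seq[String]): (Seq[String], Seq[String]) = {
    val feedSet = feedCols.toSet
    val keep    = targetCols.filter(feedSet.contains)
    val missing = targetCols.filterNot(feedSet.contains)
    (keep, missing)
  }
}

// With Spark, applying the projection would look roughly like this
// (untested sketch; assumes `spark`, `targetCols`, and a string-typed
// target table — cast each NULL to the real column type in practice):
//
//   import org.apache.spark.sql.functions.{col, lit}
//   val df = spark.read.option("header", "true").csv("/data/feed/")
//   val (keep, missing) = SchemaAlign.align(targetCols, df.columns.toSeq)
//   val aligned = df.select(
//     keep.map(col) ++ missing.map(c => lit(null).cast("string").as(c)): _*
//   )
//   aligned.write.mode("append").insertInto("db.target_table")
```

Because `insertInto` resolves columns by position, projecting in the target schema's order keeps day-1 and day-2 loads consistent without any `ALTER TABLE`.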

Thanks
Anbu



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/


