spark-user mailing list archives

From Jörn Franke <jornfra...@gmail.com>
Subject Re: Spark scala/Hive scenario
Date Wed, 07 Aug 2019 19:32:23 GMT
You can use the map datatype on the Hive table for the columns that are uncertain:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-ComplexTypes

However, maybe you can share more concrete details, because there could also be other solutions.
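
As a rough illustration, here is a minimal sketch in Spark Scala, assuming the target table keeps the five fixed columns plus one map<string,string> column that absorbs whatever extra fields a given day's file carries. The table name, column names, and file path are hypothetical:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit, map}

object FlexibleFeedLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("flexible-feed-load")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical target table: five stable columns plus a map column
    // that absorbs any additional fields that show up on later days.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS feed_table (
        |  col1 STRING, col2 STRING, col3 STRING, col4 STRING, col5 STRING,
        |  extra_attrs MAP<STRING, STRING>)
        |STORED AS PARQUET""".stripMargin)

    // Hypothetical input path; day 1 has 5 columns, day 2 has 8.
    val input = spark.read.option("header", "true").csv("/path/to/feed")

    val stableCols = Seq("col1", "col2", "col3", "col4", "col5")
    val extraCols  = input.columns.filterNot(stableCols.contains)

    // Pack whatever extra columns today's file carries into key/value pairs.
    val kvPairs = extraCols.flatMap(c => Seq(lit(c), col(c).cast("string")))
    val extras  =
      if (kvPairs.isEmpty) lit(null).cast("map<string,string>").as("extra_attrs")
      else map(kvPairs: _*).as("extra_attrs")

    input
      .select((stableCols.map(col) :+ extras): _*)
      .write.mode("append").insertInto("feed_table")
  }
}

With that layout the day-2 file with 8 columns lands in the same target schema; the additional fields simply become entries in extra_attrs and can still be queried in Hive with extra_attrs['field_name'].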

> On 07.08.2019 at 20:40, anbutech <anbutech17@outlook.com> wrote:
> 
> Hi All,
> 
> I have a scenario in Spark Scala/Hive:
> 
> Day 1:
> 
> I have a file with 5 columns which needs to be processed and loaded into
> Hive tables.
> 
> Day 2:
> 
> The next day the same feed (file) has 8 columns (additional fields) which
> need to be processed and loaded into Hive tables.
> 
> How do we approach this problem without changing the target table schema?
> Is there any way we can achieve this?
> 
> Thanks
> Anbu
> 
> 
> 
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
> 
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
> 
