sqoop-user mailing list archives

From Douglas Spadotto <dougspado...@gmail.com>
Subject Re: Importing data to bucketed table in Hive
Date Wed, 28 Mar 2018 21:20:50 GMT
Hi Sowjanya,

You can try importing the data into delimited files on HDFS and creating an
external table over them, which can then be used to insert the data into the
bucketed table.

That is the pattern I usually use to load data into Hive: use managed tables
only for actual user access.
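Something along these lines works for me. The connection string, paths, and
table/column names below are only placeholders for illustration, so adjust
them to your own schema:

# 1) Land the source table as delimited text files on HDFS
sqoop import \
  --connect jdbc:postgresql://dbhost:5432/mydb \
  --username myuser -P \
  --table source_table \
  --target-dir /user/hive/staging/source_table \
  --fields-terminated-by ',' \
  --num-mappers 4

-- 2) Expose those files through an external staging table in Hive
CREATE EXTERNAL TABLE staging_source (
  id INT,
  event_date STRING,
  value DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/hive/staging/source_table';

-- 3) Let Hive do the bucketing while inserting into the ACID target table
--    (hive.enforce.bucketing only matters on Hive 1.x; it is always on in 2.x)
SET hive.enforce.bucketing = true;
INSERT INTO TABLE bucketed_target
SELECT id, event_date, value FROM staging_source;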

Regards,

Douglas

On Wednesday, March 28, 2018, Sowjanya Kakarala <sowjanya@agrible.com>
wrote:

> Hi Guys,
>
> I am trying to import the data from my db to Hive using Sqoop.
>
> My use case is to get the data into Hive and later, if necessary, update the
> records from Hive. For ACID transactions I learned that I definitely need
> bucketed tables.
>
> Now I created a bucketed table, and when I try to import the data using
> Sqoop import it throws an error as follows:
>
> ERROR tool.ImportTool: Encountered IOException running import job:
> org.apache.hive.hcatalog.common.HCatException : 2016 : Error operation
> not supported : *Store into a partition with bucket definition from
> Pig/Mapreduce is not supported*
>
> If it is not supported, is there a workaround?
>
> I created a partitioned table and imported data into that table; can I
> insert this data from the partitioned table into the bucketed table? If yes,
> can someone provide a *best* example?
>
> I have tried an `insert into` statement for each partition, which takes very
> long, because each of my years contains “241972735” records.
>
> Also, this might not be a good practice for a production environment.
>
> Appreciate your help.
>
> Thanks
> Sowjanya
>
