sqoop-user mailing list archives

From: Joanne Chan <jc...@shutterstock.com>
Subject: Re: Joins in Sqoop
Date: Thu, 16 Jan 2014 15:28:15 GMT
You can try concat_ws(' ', map_keys(UNION_MAP(MAP(your_column, 'dummy')))),
as mentioned in https://issues.apache.org/jira/browse/HIVE-707
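
For illustration, a minimal sketch of that workaround, assuming the
UNION_MAP UDAF from the HIVE-707 patch (or an equivalent map-union
aggregate) is registered, and a hypothetical table tags(id, tag):

    # Emulate MySQL's GROUP_CONCAT(tag) per id: UNION_MAP merges each
    # group's single-entry maps, and map_keys() recovers the distinct tags.
    hive -e "
    SELECT id,
           concat_ws(' ', map_keys(UNION_MAP(MAP(tag, 'dummy')))) AS all_tags
    FROM tags
    GROUP BY id;"

On recent Hive releases the built-in collect_set() gives a similar result
without a custom UDAF: concat_ws(' ', collect_set(tag)).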



On Thu, Jan 16, 2014 at 9:13 AM, Sambit Tripathy (RBEI/PJ-NBS) <Sambit.Tripathy@in.bosch.com> wrote:

> Hi Chalcy,
>
> I am using the GROUP_CONCAT function in my query, which actually puts all
> the columns in memory, and I am afraid Hive does not have this feature.
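
(For comparison, the MySQL side being emulated, using the same hypothetical
tags(id, tag) table as above. GROUP_CONCAT assembles each group's string in
memory, capped by the group_concat_max_len setting:)

    mysql shop -e "SELECT id, GROUP_CONCAT(tag SEPARATOR ' ') AS all_tags
                   FROM tags GROUP BY id;"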
>
> Regards,
> Sambit.
>
> From: Chalcy [mailto:chalcy@gmail.com]
> Sent: Thursday, January 16, 2014 7:19 PM
> To: user@sqoop.apache.org
> Subject: Re: Joins in Sqoop
>
> Hi Sambit,
>
> I would import all the relevant tables into Hive and then do the join
> there, if you have enough space in the Hadoop cluster.
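
A minimal sketch of this approach, with a hypothetical connect string and
hypothetical tables orders and customers:

    # Import each base table on its own, so MySQL never executes the join:
    sqoop import --connect jdbc:mysql://dbhost/shop --username sambit -P \
        --table orders --hive-import --hive-table orders
    sqoop import --connect jdbc:mysql://dbhost/shop --username sambit -P \
        --table customers --hive-import --hive-table customers

    # The join then runs on the cluster instead of on the MySQL server:
    hive -e "SELECT o.*, c.name
             FROM orders o JOIN customers c ON (o.customer_id = c.id);"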
>
> Hope this helps,
> Chalcy
>
> On Thu, Jan 16, 2014 at 8:20 AM, Sambit Tripathy (RBEI/PJ-NBS) <Sambit.Tripathy@in.bosch.com> wrote:
>
> Hi,
>
> I have written a query which has 5 JOIN clauses, and I am passing this
> query to a Sqoop import.
>
> Problem: this produces a large temp file in the MySQL server's temp
> directory and throws an error saying "No space left on device". Yes, this
> can be fixed by increasing the size of the temp directory on the MySQL
> server, but what if you actually don't have any space left on the MySQL
> server? Are there any workarounds for this? I mean something like a batch
> import which does not create a big temp file on the server.
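
For reference, a free-form import of this shape (hypothetical query,
credentials, and paths) is what triggers the behaviour described above:
MySQL executes the full join for each mapper's slice, and large joins can
spill to the server's temp directory:

    # Each of the 4 mappers runs the join with its own $CONDITIONS range,
    # so the join itself still executes inside MySQL:
    sqoop import --connect jdbc:mysql://dbhost/shop --username sambit -P \
        --query 'SELECT t1.id, t2.val FROM t1 JOIN t2 ON (t1.id = t2.t1_id)
                 WHERE $CONDITIONS' \
        --split-by t1.id -m 4 \
        --target-dir /user/sambit/joined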
>
> Regards,
> Sambit.



-- 
-- JChan
