sqoop-user mailing list archives

From "Sambit Tripathy (RBEI/PJ-NBS)" <Sambit.Tripa...@in.bosch.com>
Subject RE: Joins in Sqoop
Date Thu, 16 Jan 2014 14:13:39 GMT
Hi Chalcy,

I am using MySQL's GROUP_CONCAT function in my query, which builds the whole
concatenated string for each group in memory on the server side, and I am afraid
Hive does not have this function.
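A minimal sketch of such an import, for illustration only; the connection string, table, and column names are hypothetical, not from the original message (free-form query imports in Sqoop 1 require the literal $CONDITIONS placeholder):

```shell
# Hypothetical sketch: Sqoop free-form import using MySQL's GROUP_CONCAT.
# GROUP_CONCAT materializes the concatenated string per group on the MySQL
# side, which contributes to server-side temp-file growth.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --query 'SELECT o.customer_id, GROUP_CONCAT(o.id) AS order_ids
           FROM orders o
           WHERE $CONDITIONS
           GROUP BY o.customer_id' \
  --split-by o.customer_id \
  --target-dir /user/etl/order_ids
```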


Regards,
Sambit.

From: Chalcy [mailto:chalcy@gmail.com]
Sent: Thursday, January 16, 2014 7:19 PM
To: user@sqoop.apache.org
Subject: Re: Joins in Sqoop

Hi Sambit,

If you have enough space in the Hadoop cluster, I would import all the relevant
tables into Hive and then do the join there.
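A rough sketch of that approach, assuming hypothetical connection details and table names (import each table on its own, then join inside Hive instead of MySQL):

```shell
# Hypothetical sketch: per-table imports straight into Hive.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --table orders \
  --hive-import --hive-table orders

sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --table customers \
  --hive-import --hive-table customers

# The join then runs on the cluster, not on the MySQL server:
hive -e 'SELECT o.id, c.name
         FROM orders o
         JOIN customers c ON o.customer_id = c.id;'
```

Plain-table imports like these avoid the single big join result set on the MySQL side; each mapper only streams a slice of one table.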

Hope this helps,
Chalcy

On Thu, Jan 16, 2014 at 8:20 AM, Sambit Tripathy (RBEI/PJ-NBS) <Sambit.Tripathy@in.bosch.com<mailto:Sambit.Tripathy@in.bosch.com>>
wrote:
Hi,

I have written a query with 5 JOIN clauses, and I am passing this query to Sqoop import.

Problem: this produces a large temp file in the MySQL server's temp directory,
and the job fails with a "No space left on device" error. Yes, this can be fixed
by increasing the size of the temp directory on the MySQL server, but what if
you actually don't have any space left on the MySQL server? Are there any
workarounds for this? I mean something like a batch import which does not create
a big temp file on the server.
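One way to approximate a "batch import" with stock Sqoop is to let it split the free-form query across mappers, so MySQL evaluates several smaller bounded queries rather than materializing one huge result set. A sketch under assumed connection details, query, and split column (none of which are from the original message):

```shell
# Hypothetical sketch: split the join query across 8 mappers. Sqoop
# substitutes a range predicate for $CONDITIONS in each mapper's query,
# so each mapper pulls only a slice of the join result.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --query 'SELECT o.id, c.name
           FROM orders o
           JOIN customers c ON o.customer_id = c.id
           WHERE $CONDITIONS' \
  --split-by o.id \
  -m 8 \
  --target-dir /user/etl/orders_joined
```

Whether this actually shrinks the server-side temp usage depends on how MySQL plans each sliced query; if the join still scans everything per slice, importing the tables individually and joining on the cluster may be the only real workaround.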


Regards,
Sambit.


