sqoop-user mailing list archives

From Gwen Shapira <gshap...@cloudera.com>
Subject Re: sqoop import to S3 hits 5 GB limit
Date Sun, 03 Aug 2014 19:07:10 GMT
Hi,

Sqoop2 is rather experimental and will not solve your problem.

I'd try to work around the issue by increasing the number of mappers until
each mapper is writing less than 5 GB worth of data.
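
For example, something along these lines (the connection details, table
name, and mapper count are placeholders; adjust them for your setup):

  # more mappers means more, smaller output files per import
  sqoop import \
    --connect jdbc:mysql://dbhost/mydb \
    --username myuser -P \
    --table mytable \
    --target-dir s3n://mybucket/mytable \
    --num-mappers 16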

If that doesn't work for you, then importing to HDFS first and copying the
data over to S3 afterwards is an option.
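
That is, point --target-dir at an HDFS directory, then push the files to
S3 with distcp, e.g. (paths here are placeholders):

  # copy the imported files from HDFS to S3
  hadoop distcp hdfs:///user/myuser/mytable s3n://mybucket/mytable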

Gwen

On Thu, Jul 31, 2014 at 2:32 PM, Allan Ortiz <aortiz@g2llc.com> wrote:
> I am trying to use Sqoop 1.4.4 to import data from a MySQL DB directly to
> S3, and I am running into an issue: if one of the file splits is larger
> than 5 GB, the import fails.
>
> Details for this question are listed here in my SO post - I promise to
> follow good cross-posting etiquette :)
> http://stackoverflow.com/questions/25068747/sqoop-import-to-s3-hits-5-gb-limit
>
> One of my main questions is: should I be using Sqoop 2 rather than Sqoop
> 1.4.4?  Also, should I be sqooping to HDFS and then copying the data over
> to S3 for permanent storage?  Thanks!
>
>
