sqoop-user mailing list archives

From Christian Prokopp <christ...@rangespan.com>
Subject Re: /tmp dir for import configurable?
Date Thu, 28 Mar 2013 15:54:11 GMT
Thanks for the idea, Alex. I considered this, but it would mean changing my
cluster setup for Sqoop (a last-resort option). I'd much rather point Sqoop
at the existing large disks.
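If the /tmp fill-up comes from the s3n:// filesystem buffering blocks on local
disk before uploading them (an assumption — the --direct mysqldump pipe could
also be the culprit), Hadoop's fs.s3.buffer.dir property redirects that buffer.
It can be passed to Sqoop as a generic option, which must precede the
tool-specific arguments; a sketch reusing the command from the thread (the
/mnt/bigdisk path is hypothetical):

```shell
# Redirect the s3n local buffer to a larger partition (path is an assumption)
sqoop import -D fs.s3.buffer.dir=/mnt/bigdisk/s3tmp \
  --connect jdbc:mysql://server:port/db \
  --username user --password pass \
  --table tablename \
  --target-dir s3n://xyz@somehwere/a/b/c \
  --fields-terminated-by='\001' -m 1 --direct
```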

Cheers,
Christian


On Thu, Mar 28, 2013 at 3:50 PM, Alexander Alten-Lorenz <wget.null@gmail.com
> wrote:

> You could mount a bigger disk on /tmp, or symlink /tmp to another
> directory that has enough space.
>
> Best
> - Alex
>
> On Mar 28, 2013, at 4:35 PM, Christian Prokopp <christian@rangespan.com>
> wrote:
>
> > Hi,
> >
> > I am using sqoop to copy data from MySQL to S3:
> >
> > (Sqoop 1.4.2-cdh4.2.0)
> > $ sqoop import --connect jdbc:mysql://server:port/db --username user
> --password pass  --table tablename --target-dir s3n://xyz@somehwere/a/b/c
> --fields-terminated-by='\001' -m 1 --direct
> >
> > My problem is that sqoop temporarily stores the data on /tmp, which is
> not big enough for the data. I am unable to find a configuration option to
> point sqoop to a bigger partition/disk. Any suggestions?
> >
> > Cheers,
> > Christian
> >
>
> --
> Alexander Alten-Lorenz
> http://mapredit.blogspot.com
> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
>
>
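The mount/symlink approach Alex describes can be sketched like this,
demonstrated on throwaway paths (on a real system you would do the same
for /tmp itself as root, with services stopped; "/data" as the large
partition is an assumption):

```shell
demo=$(mktemp -d)                    # stand-in for the filesystem root
mkdir -p "$demo/data/tmp"            # "/data" plays the large disk (assumption)
chmod 1777 "$demo/data/tmp"          # match /tmp's sticky-bit permissions
ln -s "$demo/data/tmp" "$demo/tmp"   # "/tmp" now resolves to the big disk
touch "$demo/tmp/probe"              # writes through the symlink...
ls "$demo/data/tmp"                  # ...and lands on the big disk: probe
rm -rf "$demo"
```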


-- 
Best regards,

*Christian Prokopp*
Data Scientist, PhD
Rangespan Ltd. <http://www.rangespan.com/>
