sqoop-user mailing list archives

From Vineet Mishra <clearmido...@gmail.com>
Subject Re: Sqoop import job submit mapreduce stuck at 86%
Date Thu, 05 Mar 2015 19:50:32 GMT
Hi Syed,

By default the number of mappers for your job will be 4, which can be
changed by passing an argument to sqoop import, something like this:

sqoop import --driver jdbc. . . -m 10

Use 10, or whatever higher number of mapper slots you have available, in
line with the number of concurrent connections your database supports.
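
For reference, a fuller Sqoop 1 command along those lines might look like the
sketch below; the connect string, driver class, username, table and target
directory are placeholders, not values from this thread:

    # hypothetical example: import one table with 10 parallel mappers
    sqoop import \
      --connect jdbc:mysql://dbhost:3306/mydb \
      --driver com.mysql.jdbc.Driver \
      --username dbuser -P \
      --table big_table \
      --split-by id \
      --num-mappers 10 \
      --target-dir /user/hadoop/big_table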

The reason the smaller tables finish in time is that the data size is
small, so it is brought over the network in no time.

The data retrieval timeout, or perhaps the connection timeout, is a
property of the database, which you need to check on your respective
database server.
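
For example, if the source happens to be MySQL (the thread does not say
which database is used), the server-side timeouts could be inspected and
raised roughly like this:

    # hypothetical MySQL example: inspect and raise server-side timeouts
    mysql -h dbhost -u dbuser -p -e "SHOW VARIABLES LIKE '%timeout%';"
    mysql -h dbhost -u dbuser -p -e "SET GLOBAL net_write_timeout = 600;"
    mysql -h dbhost -u dbuser -p -e "SET GLOBAL net_read_timeout = 600;"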

Go to your respective target folder and check whether the data files for
those big tables are being created or not, and if they are, whether they
are growing over time.
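
A minimal way to check that from the command line, assuming the import
writes to /user/hadoop/big_table (a placeholder path):

    # check whether part files exist for the table being imported
    hdfs dfs -ls /user/hadoop/big_table
    # check their total size, then repeat after a minute or two to see if it grows
    hdfs dfs -du -h /user/hadoop/big_table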

Cheers!
On Mar 5, 2015 11:10 PM, "Syed Akram" <akram.basha@zohocorp.com> wrote:

> Hi Vineet,
>
> Basically I'm using Sqoop2 (1.99.3); I am not setting any number of mappers
> here.
>
> Please suggest the configuration in Sqoop2 to import larger tables to HDFS
> quickly.
>
> Setting the database server data retrieval timeout: how do I set this in
> Sqoop2?
> When I'm running small tables, they finish quickly, but when I'm importing
> larger tables with millions of rows, the MapReduce progress reaches 86%
> quickly, maybe in less than a minute, and then gets stuck.
>
>
>
> I need suggestions on the MapReduce configuration,
> since the machine configuration is:
>
> 16GB RAM
> 500GB Disk
>
> Thanks for the quick reply.
>
> ---- On Thu, 05 Mar 2015 22:58:22 +0530 Vineet Mishra
> <clearmidoubt@gmail.com> wrote ----
>
> Hey Syed,
>
> Can you mention the number of mappers you are running the job with?
>
> It may be a problem with your database server's data retrieval timeout
> limit. Try these things:
>
> Try to set the timeout limit of your respective database to a higher number
>
> Or
>
> Try to run your job with the maximum number of mappers possible on your
> cluster using the argument -m [number of mappers]
>
> Hope that helps you out.
> On Mar 5, 2015 4:24 PM, "Syed Akram" <akram.basha@zohocorp.com> wrote:
>
>
> Hi,
>
> I am importing a table with more than a million rows using Sqoop2 1.99.3.
>
> After creating the job I submit it; the MapReduce job status is RUNNING,
> and in less than a minute the progress reaches 86%
> and then gets stuck forever, sometimes quitting with a TIMEOUT exception.
>
> Please suggest what I am doing wrong and what things I have to check.
>
> Every import of a large table is stuck at 86%.
>
> Below is the application status:
>
> Thanks
> Akram Syed
>
> All Applications
>
> Cluster Metrics:
>   Apps Submitted: 1, Apps Pending: 0, Apps Running: 1, Apps Completed: 0
>   Containers Running: 2
>   Memory Used: 9.84 GB, Memory Total: 39.41 GB, Memory Reserved: 0 B
>   VCores Used: 2, VCores Total: 40, VCores Reserved: 0
>   Active Nodes: 5, Decommissioned Nodes: 0, Lost Nodes: 0,
>   Unhealthy Nodes: 0, Rebooted Nodes: 0
> ID:               application_1425550787783_0001
>                   <http://dfsadmin.zohonoc.com:8088/cluster/app/application_1425550787783_0001>
> User:             sas
> Name:             Sqoop: ImportJob
> Application Type: MAPREDUCE
> Queue:            default
> StartTime:        Thu, 05 Mar 2015 10:20:31 GMT
> FinishTime:       N/A
> State:            RUNNING
> FinalStatus:      UNDEFINED
> Tracking UI:      ApplicationMaster
>                   <http://172.31.252.206:8088/proxy/application_1425550787783_0001/>
>
>
>
>
