sqoop-user mailing list archives

From Kathleen Ting <kathl...@cloudera.com>
Subject Re: Sqoop Action error in Oozie
Date Fri, 30 Mar 2012 18:45:29 GMT
Shibu - are you using FairScheduler? If so, and since you mention that
the Sqoop import command itself is successful, you could be hitting
your per-user job limit.

Whenever Oozie launches a job, it requires at least two job
submissions - one for the monitor+launcher, and subsequent ones for
the jobs that do the real work. The launcher job submits the
remaining jobs and hence sticks around until they have all ended -
taking up one running job slot for the whole lifetime of the Oozie
job.

For example, with a per-user job limit of 3, if you were to run 3
Oozie jobs, the 3 slots would be filled with launchers first. These
would then submit their real jobs, which would end up waiting in the
queue - thereby forming a resource deadlock.

The solution is to channel the Oozie launcher Hadoop jobs into a
dedicated launcher pool. This pool can have a running job limit too,
but it won't cause a deadlock because the launchers and the real jobs
now sit in separate pools.
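
As a rough sketch, such a pool could be defined in the FairScheduler
allocations file along these lines (the pool name "launcher" and the
limits below are just placeholders - adjust them to your cluster):

    <?xml version="1.0"?>
    <allocations>
      <!-- pool reserved for Oozie launcher jobs -->
      <pool name="launcher">
        <maxRunningJobs>5</maxRunningJobs>
      </pool>
      <!-- pool where the real (worker) jobs run -->
      <pool name="default">
        <maxRunningJobs>3</maxRunningJobs>
      </pool>
    </allocations>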

To route the launchers there, pass the config property
"oozie.launcher.<property that specifies your pool>" via the
workflow's <configuration> elements or <job-xml> files, pointing it
at the separate pool.
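
For instance, on clusters where the pool is selected by the
mapred.fairscheduler.pool property (a common setup for the MR1
FairScheduler, but check your cluster's
mapred.fairscheduler.poolnameproperty setting - yours may differ),
the Sqoop action would look roughly like this:

    <action name="sqoop-import">
      <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
          <!-- the "oozie.launcher." prefix makes this apply to the -->
          <!-- launcher job only; the real import job stays in its  -->
          <!-- usual pool                                           -->
          <property>
            <name>oozie.launcher.mapred.fairscheduler.pool</name>
            <value>launcher</value>
          </property>
        </configuration>
        <command>import --connect ... --table employee ...</command>
      </sqoop>
      <ok to="end"/>
      <error to="fail"/>
    </action>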

Regards, Kathleen


On Fri, Mar 30, 2012 at 7:10 AM, Jarek Jarcec Cecho <jarcec@apache.org> wrote:
>
> Hi,
> can you share your workflow.xml file with the complete log of the Sqoop execution (it can
> be retrieved from the "launcher" job)? Please add the parameter "--verbose" to get a more detailed log.
>
> Also, can you share your Sqoop and Oozie versions?
>
> Jarcec
>
> On Mar 30, 2012, at 3:31 PM, Shibu Thomas wrote:
>
> > Hi All,
> >
> > We are executing Sqoop actions in parallel using a fork in Oozie.
> > In the Sqoop actions we are executing the command below.
> >
> > import --driver com.mysql.jdbc.Driver --connect 'jdbc:mysql://127.0.0.1/bedrock' --username root --password Pass1234 --table employee --target-dir /user/cloudera/employee --split-by 'Id' -m 1
> >
> > We have just 3 records in the MySQL table.
> >
> > This workflow ran for a couple of hours, and even though the import is successful,
> > we get the error below in Oozie.
> > Oozie also reports that the jobs are killed/errored.
> >
> > 2012-03-30 07:18:17,513 INFO org.apache.hadoop.mapred.JobClient: map 0% reduce 0%
> > 2012-03-30 08:55:04,288 WARN org.apache.hadoop.mapred.Task: Parent died. Exiting attempt_201203210951_0155_m_000000_0
> > 2012-03-30 08:55:04,292 INFO org.apache.hadoop.mapred.Task: Communication exception: java.lang.SecurityException: Intercepted System.exit(66)
> >
> > Thanks
> >
> > Shibu Thomas
> > MSCIS-IS
> > Office :  +91 (40) 669 32660
> > Mobile: +91 95811 51116
>
