sqoop-user mailing list archives

From: Kate Ting <k...@cloudera.com>
Subject: Re: [sqoop-user] Problem using sqoop with --direct (mysqldump)
Date: Fri, 16 Sep 2011 15:55:32 GMT
[Moving the conversation to sqoop-user@incubator.apache.org. Please
subscribe (and post questions) to the new mailing list.]

Hi Eric -

(1) Is the mysqldump utility installed on each of the worker nodes?
(2) If so, can you pastebin your task log as well as the verbose output? (A
quick way to check the former and capture the latter is sketched below.)
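For reference, something along these lines should confirm both, assuming SSH
access to the worker nodes (the hostnames, connect string, and credentials
below are placeholders, not taken from your setup):

  # Confirm mysqldump is on the PATH of every node that runs map tasks:
  for host in node1 node2 node3; do
      ssh "$host" 'command -v mysqldump || echo "mysqldump missing on $(hostname)"'
  done

  # Re-run the import with --verbose and capture the full client output:
  sqoop import --connect jdbc:mysql://dbhost/mydb --username someuser -P \
      --table table1 --direct --verbose 2>&1 | tee sqoop-verbose.log

The failed attempt's own task log (stdout/stderr/syslog) is available through
the JobTracker web UI, or under the TaskTracker's userlogs directory on the
node that ran the attempt.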

Regards, Kate

On Fri, Sep 16, 2011 at 8:04 AM, Eric <eric.hardway@gmail.com> wrote:
> Hi all,
>
> I cannot import with Sqoop using the --direct option; the same import
> works fine until I add --direct.
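>
> For illustration, the command shape is roughly the following (the host,
> database, and credentials are placeholders, not my real values):
>
>   sqoop import --connect jdbc:mysql://dbhost/mydb --username someuser -P \
>       --table table1 --direct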
>
> I am using Sqoop 1.3.0-cdh3u1
> git commit id 3a60cc809b14d538dd1eb0e90ffa9767e8d06a43
> Compiled by jenkins@ubuntu-slave01 on Mon Jul 18 08:38:49 PDT 2011
>
> Please advise,
>
> -Eric
>
> Error message:
>
> 11/09/16 07:57:39 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
> 11/09/16 07:57:39 INFO tool.CodeGenTool: Beginning code generation
> 11/09/16 07:57:40 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `table1` AS t LIMIT 1
> 11/09/16 07:57:40 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `table1` AS t LIMIT 1
> 11/09/16 07:57:40 INFO orm.CompilationManager: HADOOP_HOME is /usr/lib/hadoop
> 11/09/16 07:57:40 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop/hadoop-0.20.2-cdh3u1-core.jar
> 11/09/16 07:57:41 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/aef5c62d2156aeae5338ee272de42d26/table1.jar
> 11/09/16 07:57:41 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
> 11/09/16 07:57:41 INFO mapreduce.ImportJobBase: Beginning import of table1
> 11/09/16 07:57:41 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `table1` AS t LIMIT 1
> 11/09/16 07:57:43 INFO mapred.JobClient: Running job: job_201109160744_0004
> 11/09/16 07:57:44 INFO mapred.JobClient:  map 0% reduce 0%
> 11/09/16 07:57:50 INFO mapred.JobClient: Task Id : attempt_201109160744_0004_m_000000_0, Status : FAILED
> java.io.IOException: mysqldump terminated with status 5
>        at com.cloudera.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:476)
>        at com.cloudera.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>        at org.apache.hadoop.mapred.Child.main(Child.java:264)
>
> attempt_201109160744_0004_m_000000_0: Exception in thread "Thread-12" java.lang.IndexOutOfBoundsException
> attempt_201109160744_0004_m_000000_0:   at java.nio.CharBuffer.wrap(CharBuffer.java:445)
> attempt_201109160744_0004_m_000000_0:   at com.cloudera.sqoop.mapreduce.MySQLDumpMapper$ReparsingAsyncSink$ReparsingStreamThread.run(MySQLDumpMapper.java:253)
> attempt_201109160744_0004_m_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
> attempt_201109160744_0004_m_000000_0: log4j:WARN Please initialize the log4j system properly.
> 11/09/16 07:57:55 INFO mapred.JobClient: Task Id : attempt_201109160744_0004_m_000000_1, Status : FAILED
> java.io.IOException: mysqldump terminated with status 5
>        at com.cloudera.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:476)
>        at com.cloudera.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>        at org.apache.hadoop.mapred.Child.main(Child.java:264)
>
> attempt_201109160744_0004_m_000000_1: Exception in thread "Thread-12" java.lang.IndexOutOfBoundsException
> attempt_201109160744_0004_m_000000_1:   at java.nio.CharBuffer.wrap(CharBuffer.java:445)
> attempt_201109160744_0004_m_000000_1:   at com.cloudera.sqoop.mapreduce.MySQLDumpMapper$ReparsingAsyncSink$ReparsingStreamThread.run(MySQLDumpMapper.java:253)
> attempt_201109160744_0004_m_000000_1: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
> attempt_201109160744_0004_m_000000_1: log4j:WARN Please initialize the log4j system properly.
> 11/09/16 07:58:01 INFO mapred.JobClient: Task Id : attempt_201109160744_0004_m_000000_2, Status : FAILED
> java.io.IOException: mysqldump terminated with status 5
>        at com.cloudera.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:476)
>        at com.cloudera.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
>        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>        at org.apache.hadoop.mapred.Child.main(Child.java:264)
>
> attempt_201109160744_0004_m_000000_2: Exception in thread "Thread-12" java.lang.IndexOutOfBoundsException
> attempt_201109160744_0004_m_000000_2:   at java.nio.CharBuffer.wrap(CharBuffer.java:445)
> attempt_201109160744_0004_m_000000_2:   at com.cloudera.sqoop.mapreduce.MySQLDumpMapper$ReparsingAsyncSink$ReparsingStreamThread.run(MySQLDumpMapper.java:253)
> attempt_201109160744_0004_m_000000_2: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
> attempt_201109160744_0004_m_000000_2: log4j:WARN Please initialize the log4j system properly.
> 11/09/16 07:58:07 INFO mapred.JobClient: Job complete: job_201109160744_0004
> 11/09/16 07:58:07 INFO mapred.JobClient: Counters: 6
> 11/09/16 07:58:07 INFO mapred.JobClient:   Job Counters
> 11/09/16 07:58:07 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=19165
> 11/09/16 07:58:07 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> 11/09/16 07:58:07 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> 11/09/16 07:58:07 INFO mapred.JobClient:     Launched map tasks=4
> 11/09/16 07:58:07 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 11/09/16 07:58:07 INFO mapred.JobClient:     Failed map tasks=1
> 11/09/16 07:58:07 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 25.1844 seconds (0 bytes/sec)
> 11/09/16 07:58:07 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> 11/09/16 07:58:07 ERROR tool.ImportTool: Error during import: Import job failed!
>
> --
> NOTE: The mailing list sqoop-user@cloudera.org is deprecated in favor of the Apache Sqoop
> mailing list sqoop-user@incubator.apache.org. Please subscribe to it by sending an email to
> incubator-sqoop-user-subscribe@apache.org.
