sqoop-user mailing list archives

From Jarek Jarcec Cecho <jar...@apache.org>
Subject Re: Need help and tips for the following issue: No data gets exported from Hadoop to MySQL using Sqoop.
Date Wed, 10 Oct 2012 23:58:32 GMT
Hi sir,
I have practically zero experience with Amazon services, so I'm afraid I can't help you much with navigating to the map task logs. On a normal Hadoop cluster there is a service called the "JobTracker" that serves as the central place for MapReduce jobs. I would expect that you can find this web service, or something similar, somewhere in EMR as well. There you should see the jobs executed by Hadoop, and you should also be able to get to the individual task logs.
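For instance, on a stock Hadoop 1.x cluster the JobTracker web UI usually listens on port 50030 of the master node (EMR may remap that port), and you can also poke around from the shell. This is just a sketch; the log path below is a guess for your EMR setup, so adjust as needed:

hadoop job -list
# task attempt logs usually live on the node that ran the attempt,
# somewhere under the Hadoop log directory, e.g.:
ls /mnt/var/log/hadoop/userlogs/attempt_201210101503_0019_m_000000_0/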


Following up on my previous blind shot: how is the MySQL user that you're using for Sqoop defined? I'm very interested in the host part of the user. For example, there are usually users like root@localhost or jarcec@'%'. If your host part (in my examples, localhost or '%') is restrictive enough, your Hadoop nodes might not be able to connect to that MySQL box at all, resulting in connection failures.
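You can inspect the host part, and open the account up if needed, with something along these lines (a sketch only; the user, host, and database names are taken from your mails and may not match your real setup, and GRANT ... IDENTIFIED BY is the MySQL 5.x way to create the account if it doesn't exist yet):

mysql -h hostname -u root -p -e "SELECT User, Host FROM mysql.user"
mysql -h hostname -u root -p -e "GRANT ALL ON analyticsdb.* TO 'username'@'%' IDENTIFIED BY 'password'; FLUSH PRIVILEGES"

The '%' host part is the most permissive option; once the export works you can tighten it to your cluster's subnet.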

Jarcec

On Wed, Oct 10, 2012 at 05:22:14PM -0400, Matthieu Labour wrote:
> Hi Jarcec,
> If I use the PostgreSQL JDBC connector and connect to one of our Heroku
> machines, then Sqoop works:
> ~/$SQOOP_ROOT/bin/sqoop export --connect
> jdbc:postgresql://ec2-XX-XX-XXX-XX.compute-1.amazonaws.com:database
> --username username --password password --table ml_ys_log_gmt_test
> --export-dir=hdfs:///mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01
> --input-fields-terminated-by='\t'
> --lines-terminated-by='\n' --verbose --batch
> 
> On Wed, Oct 10, 2012 at 2:06 PM, Matthieu Labour <matthieu@actionx.com> wrote:
> 
> >
> > Jarcek
> >
> > I am quite new to Hadoop and Amazon EMR. Where are those files located?
> >
> > Here is what I am doing:
> >
> > 1) I am using Amazon Elastic MapReduce and I have created a new job that
> > does not terminate and whose type is HBase.
> >
> > 2) I get the job ID:
> > myaccount@ubuntu:~/elastic-mapreduce-cli$ ./elastic-mapreduce --list
> > --active
> > j-3EFP15LBJC8R4     RUNNING
> > ec2-XXX-XX-XXX-XX.compute-1.amazonaws.com         sqooping
> >    COMPLETED      Setup Hadoop Debugging
> >    COMPLETED      Start HBase
> >    COMPLETED      Setup Hive
> >    RUNNING        Setup Pig
> >
> > 3) I attach and run a step:
> > ./elastic-mapreduce -j j-3EFP15LBJC8R4 --jar
> > s3://elasticmapreduce/libs/script-runner/script-runner.jar --arg
> > s3://mybucket/sqoop/sqoop.sh
> >
> > 4) I ssh into the machine: ssh -i ~/.ec2/MYKEY.pem
> > hadoop@ec2-XXX-XX-XXX-XX.compute-1.amazonaws.com
> >
> > 5) tail -f /mnt/var/lib/hadoop/steps/6/stderr shows the MapReduce job
> > hanging:
> > 12/10/10 17:46:58 DEBUG mapreduce.ExportInputFormat: Generated splits:
> > 12/10/10 17:46:58 DEBUG mapreduce.ExportInputFormat: Paths:/mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01/part-m-00000:0+52 Locations:ip-10-77-70-192.ec2.internal:;
> > 12/10/10 17:46:58 INFO mapred.JobClient: Running job: job_201210101503_0024
> > 12/10/10 17:46:59 INFO mapred.JobClient:  map 0% reduce 0%
> >
> > 6) In /mnt/var/lib/hadoop/steps/6 there is the sqoop.sh script file with:
> > ~/sqoop-1.4.2.bin__hadoop-1.0.0/bin/sqoop export --connect
> > jdbc:mysql://hostname:3306/analyticsdb --username username --password
> > password --table ml_ys_log_gmt_test
> > --export-dir=hdfs:///mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01
> > --input-fields-terminated-by='\t' --lines-terminated-by='\n' --verbose --batch
> >
> > On that same machine, in the same location (/mnt/var/lib/hadoop/steps/6), the
> > following command works:
> > mysql -h hostname -P 3306 -u username -p
> > password: password
> > Afterwards I can use the database, describe the table, etc.
> > Please note the MySQL machine is running on Amazon RDS, and I have
> > added the ElasticMapReduce-master security group to RDS.
> >
> > Thank you for your help
> >
> >
> > On Wed, Oct 10, 2012 at 1:27 PM, Jarek Jarcec Cecho <jarcec@apache.org> wrote:
> >
> >> It would be very helpful if you could send us the task log from one of the
> >> map tasks that Sqoop executes.
> >>
> >> A blind shot: Sqoop connects to your database from the map tasks.
> >> Given the connection issues, are you sure that you can connect to your
> >> database from all nodes in your cluster?
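> >>
> >> A quick sanity check, assuming the mysql client is installed on the
> >> slave nodes (the hostnames here are placeholders), would be something
> >> like:
> >>
> >> ssh hadoop@slave-node "mysql -h hostname -P 3306 -u username -ppassword -e 'SELECT 1'"
> >>
> >> If that hangs or fails on a slave while it works on the master, you've
> >> found your culprit.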
> >>
> >> Jarcec
> >>
> >> On Wed, Oct 10, 2012 at 01:16:03PM -0400, Matthieu Labour wrote:
> >> > Hi Jarcec,
> >> >
> >> > Thank you so much for your help.
> >> >
> >> > Following your advice, I ran the following command:
> >> > ~/sqoop-1.4.2.bin__hadoop-1.0.0/bin/sqoop export --connect
> >> > jdbc:mysql://hostname:3306/analyticsdb --username username --password
> >> > password --table ml_ys_log_gmt_test --export-dir
> >> > hdfs:///mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01
> >> > --input-fields-terminated-by='\t' --lines-terminated-by='\n' --verbose
> >> >
> >> > It seems to find the file to export, so that is good. In the log I see the
> >> > following (I am not sure why :0+52 gets appended):
> >> > 12/10/10 16:43:41 DEBUG mapreduce.ExportInputFormat: Paths:/mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01/part-m-00000:0+52 Locations:ip-XX-XX-XX-XXX.ec2.internal:;
> >> >
> >> > However, it hangs forever after printing the following:
> >> > 12/10/10 16:43:42 INFO mapred.JobClient:  map 0% reduce 0%
> >> >
> >> > Then it seems the JDBC connection eventually times out:
> >> > 12/10/10 16:47:07 INFO mapred.JobClient: Task Id : attempt_201210101503_0019_m_000000_0, Status : FAILED
> >> >
> >> > Here is the log towards the end:
> >> >
> >> > 12/10/10 16:43:40 INFO mapred.JobClient: Default number of map tasks: 4
> >> > 12/10/10 16:43:40 INFO mapred.JobClient: Default number of reduce tasks: 0
> >> > 12/10/10 16:43:41 INFO mapred.JobClient: Setting group to hadoop
> >> > 12/10/10 16:43:41 INFO input.FileInputFormat: Total input paths to process : 1
> >> > 12/10/10 16:43:41 DEBUG mapreduce.ExportInputFormat: Target numMapTasks=4
> >> > 12/10/10 16:43:41 DEBUG mapreduce.ExportInputFormat: Total input bytes=52
> >> > 12/10/10 16:43:41 DEBUG mapreduce.ExportInputFormat: maxSplitSize=13
> >> > 12/10/10 16:43:41 INFO input.FileInputFormat: Total input paths to process : 1
> >> > 12/10/10 16:43:41 DEBUG mapreduce.ExportInputFormat: Generated splits:
> >> > 12/10/10 16:43:41 DEBUG mapreduce.ExportInputFormat: Paths:/mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01/part-m-00000:0+52 Locations:ip-XX-XX-XX-XXX.ec2.internal:;
> >> > 12/10/10 16:43:41 INFO mapred.JobClient: Running job: job_201210101503_0019
> >> > 12/10/10 16:43:42 INFO mapred.JobClient:  map 0% reduce 0%
> >> > 12/10/10 16:47:07 INFO mapred.JobClient: Task Id : attempt_201210101503_0019_m_000000_0, Status : FAILED
> >> > java.io.IOException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
> >> >
> >> > The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
> >> >         at org.apache.sqoop.mapreduce.ExportOutputFormat.getRecordWriter(ExportOutputFormat.java:79)
> >> >         at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:635)
> >> >         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:760)
> >> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375)
> >> >         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >> >         at java.security.AccessController.doPrivileged(Native Method)
> >> >         at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
> >> >         at org.apache.hadoop.mapred.Child.main(Child.java:249)
> >> > Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
> >> >
> >> >
> >> >
> >> >
> >> > On Wed, Oct 10, 2012 at 12:40 PM, Jarek Jarcec Cecho <jarcec@apache.org> wrote:
> >> >
> >> > > Hi sir,
> >> > > as far as I remember, FileInputFormat does not descend recursively into
> >> > > subdirectories when looking for input files. Would you mind trying to
> >> > > export the directory /mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01
> >> > > to see if it helps? Something like:
> >> > >
> >> > > sqoop export ... --export-dir
> >> > > /mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01 ...
> >> > >
> >> > > Jarcec
> >> > >
> >> > > On Wed, Oct 10, 2012 at 12:30:56PM -0400, Matthieu Labour wrote:
> >> > > > Hi
> >> > > >
> >> > > > I want to do the following: export data stored in Hadoop to MySQL.
> >> > > > It is not working and I have been pulling my hair out. I was hoping
> >> > > > to get a bit of help. Thank you in advance.
> >> > > >
> >> > > > The command is the following:
> >> > > >
> >> > > > ~/sqoop-1.4.2.bin__hadoop-1.0.0/bin/sqoop export --connect
> >> > > > jdbc:mysql://hostname:3306/analyticsdb --username username --password
> >> > > > password --table ml_ys_log_gmt_test --export-dir
> >> > > > hdfs:///mnt/var/lib/hadoop/dfs/logs_sanitized_test
> >> > > > --input-fields-terminated-by='\t' --lines-terminated-by='\n' --verbose
> >> > > >
> >> > > > On my MySQL server, in the database analyticsdb, I do have the
> >> > > > following table, ml_ys_log_gmt_test:
> >> > > >
> >> > > > mysql> describe ml_ys_log_gmt_test;
> >> > > > +--------+-------------+------+-----+---------+-------+
> >> > > > | Field  | Type        | Null | Key | Default | Extra |
> >> > > > +--------+-------------+------+-----+---------+-------+
> >> > > > | mydate | varchar(32) | YES  |     | NULL    |       |
> >> > > > | mydata | varchar(32) | YES  |     | NULL    |       |
> >> > > > +--------+-------------+------+-----+---------+-------+
> >> > > >
> >> > > > I can see the logs in HDFS:
> >> > > >
> >> > > > hadoop@ip-XX-XX-XX-XX:/mnt/var/lib/hadoop/steps/5$ hadoop dfs -ls
> >> > > > hdfs:///mnt/var/lib/hadoop/dfs/logs_sanitized_test
> >> > > > Found 2 items
> >> > > > drwxr-xr-x   - hadoop supergroup          0 2012-10-10 15:23
> >> > > > /mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01
> >> > > > drwxr-xr-x   - hadoop supergroup          0 2012-10-10 15:23
> >> > > > /mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-02
> >> > > >
> >> > > > and if I tail one of the files I see the correct data:
> >> > > >
> >> > > > hadoop@ip-XX-XX-XX-XX:/mnt/var/lib/hadoop/steps/5$ hadoop dfs -tail -f
> >> > > > hdfs:///mnt/var/lib/hadoop/dfs/logs_sanitized_test/dt=2012-10-01/part-m-00000
> >> > > > 20121001230101 blablabla1
> >> > > > 20121001230202 blablabla2
> >> > > >
> >> > > >
> >> > > > Here is the trace when I run the command. Please note that no data
> >> > > > gets transferred. I would appreciate any tips. Thanks a lot!
> >> > > >
> >> > > > Warning: /usr/lib/hbase does not exist! HBase imports will fail.
> >> > > > Please set $HBASE_HOME to the root of your HBase installation.
> >> > > > 12/10/10 16:25:25 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> >> > > > 12/10/10 16:25:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
> >> > > > 12/10/10 16:25:25 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
> >> > > > 12/10/10 16:25:25 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory
> >> > > > 12/10/10 16:25:25 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:mysql:
> >> > > > 12/10/10 16:25:25 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
> >> > > > 12/10/10 16:25:25 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.MySQLManager@5ef4f44a
> >> > > > 12/10/10 16:25:25 INFO tool.CodeGenTool: Beginning code generation
> >> > > > 12/10/10 16:25:25 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> >> > > > 12/10/10 16:25:26 DEBUG manager.SqlManager: Using fetchSize for next query: -2147483648
> >> > > > 12/10/10 16:25:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `ml_ys_log_gmt_test` AS t LIMIT 1
> >> > > > 12/10/10 16:25:26 DEBUG orm.ClassWriter: selected columns:
> >> > > > 12/10/10 16:25:26 DEBUG orm.ClassWriter:   mydate
> >> > > > 12/10/10 16:25:26 DEBUG orm.ClassWriter:   mydata
> >> > > > 12/10/10 16:25:26 DEBUG manager.SqlManager: Using fetchSize for next query: -2147483648
> >> > > > 12/10/10 16:25:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `ml_ys_log_gmt_test` AS t LIMIT 1
> >> > > > 12/10/10 16:25:26 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/ml_ys_log_gmt_test.java
> >> > > > 12/10/10 16:25:26 DEBUG orm.ClassWriter: Table name: ml_ys_log_gmt_test
> >> > > > 12/10/10 16:25:26 DEBUG orm.ClassWriter: Columns: mydate:12, mydata:12,
> >> > > > 12/10/10 16:25:26 DEBUG orm.ClassWriter: sourceFilename is ml_ys_log_gmt_test.java
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager: Found existing /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/
> >> > > > 12/10/10 16:25:26 INFO orm.CompilationManager: HADOOP_HOME is /home/hadoop
> >> > > > 12/10/10 16:25:26 INFO orm.CompilationManager: Found hadoop core jar at: /home/hadoop/hadoop-core.jar
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager: Adding source file: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/ml_ys_log_gmt_test.java
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager: Invoking javac with args:
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager:   -sourcepath
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager:   -d
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager:   -classpath
> >> > > > 12/10/10 16:25:26 DEBUG orm.CompilationManager: /home/hadoop/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop:/home/hadoop/hadoop-core-1.0.3.jar:/home/hadoop/lib/activation-1.1.jar:/home/hadoop/lib/annotations.jar:/home/hadoop/lib/ant-1.8.1.jar:/home/hadoop/lib/ant-launcher-1.8.1.jar:/home/hadoop/lib/ant-nodeps-1.8.1.jar:/home/hadoop/lib/apache-jar-resource-bundle-1.4.jar:/home/hadoop/lib/asm-3.1.jar:/home/hadoop/lib/avro-1.5.3.jar:/home/hadoop/lib/avro-compiler-1.5.3.jar:/home/hadoop/lib/avro-ipc-1.5.3.jar:/home/hadoop/lib/avro-maven-plugin-1.5.3.jar:/home/hadoop/lib/aws-java-sdk-1.3.2.jar:/home/hadoop/lib/build-helper-maven-plugin-1.5.jar:/home/hadoop/lib/commons-beanutils-1.7.0.jar:/home/hadoop/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/lib/commons-cli-1.2.jar:/home/hadoop/lib/commons-codec-1.5.jar:/home/hadoop/lib/commons-collections-3.2.1.jar:/home/hadoop/lib/commons-configuration-1.6.jar:/home/hadoop/lib/commons-daemon-1.0.1.jar:/home/hadoop/lib/commons-digester-1.8.jar:/home/hadoop/lib/commons-el-1.0.jar:/home/hadoop/lib/commons-httpclient-3.1.jar:/home/hadoop/lib/commons-io-2.4.jar:/home/hadoop/lib/commons-lang-2.5.jar:/home/hadoop/lib/commons-logging-1.1.1.jar:/home/hadoop/lib/commons-logging-adapters-1.1.1.jar:/home/hadoop/lib/commons-logging-api-1.1.1.jar:/home/hadoop/lib/commons-math-2.1.jar:/home/hadoop/lib/commons-net-3.1.jar:/home/hadoop/lib/com.sun.el_1.0.0.v201004190952.jar:/home/hadoop/lib/core-3.1.1.jar:/home/hadoop/lib/docbkx-maven-plugin-2.0.13.jar:/home/hadoop/lib/ecj-3.6.jar:/home/hadoop/lib/emr-metrics-1.0.jar:/home/hadoop/lib/emr-s3distcp-1.0.jar:/home/hadoop/lib/file-management-1.2.1.jar:/home/hadoop/lib/ftplet-api-1.0.0.jar:/home/hadoop/lib/ftpserver-core-1.0.0.jar:/home/hadoop/lib/ftpserver-deprecated-1.0.0-M2.jar:/home/hadoop/lib/gson-1.6.jar:/home/hadoop/lib/guava-12.0.jar:/home/hadoop/lib/hadoop-ant-1.0.3.jar:/home/hadoop/lib/hadoop-ant.jar:/home/hadoop/lib/hadoop-capacity-scheduler-1.0.3.jar:/home/hadoop/lib/hadoop-client-1.0.3.jar:/home/hadoop/lib/hadoop-core-1.0.3.jar:/home/hadoop/lib/hadoop-core.jar:/home/hadoop/lib/hadoop-examples-1.0.3.jar:/home/hadoop/lib/hadoop-examples.jar:/home/hadoop/lib/hadoop-fairscheduler-1.0.3.jar:/home/hadoop/lib/hadoop-minicluster-1.0.3.jar:/home/hadoop/lib/hadoop-state-pusher-1.0.jar:/home/hadoop/lib/hadoop-test-1.0.3.jar:/home/hadoop/lib/hadoop-test.jar:/home/hadoop/lib/hadoop-thriftfs-1.0.3.jar:/home/hadoop/lib/hadoop-tools-1.0.3.jar:/home/hadoop/lib/hadoop-tools.jar:/home/hadoop/lib/hamcrest-all-1.1.jar:/home/hadoop/lib/hamcrest-core-1.1.jar:/home/hadoop/lib/hbase-0.92.0.jar:/home/hadoop/lib/high-scale-lib-1.1.1.jar:/home/hadoop/lib/high-scale-lib.jar:/home/hadoop/lib/hsqldb-1.8.0.10.jar:/home/hadoop/lib/httpclient-4.1.1.jar:/home/hadoop/lib/httpclient-cache-4.1.1.jar:/home/hadoop/lib/httpcore-4.1.jar:/home/hadoop/lib/httpcore-nio-4.1.jar:/home/hadoop/lib/httpmime-4.1.1.jar:/home/hadoop/lib/icu4j-4_0_1.jar:/home/hadoop/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/lib/jamon-anttask-2.4.0.jar:/home/hadoop/lib/jamon-api-2.3.0.jar:/home/hadoop/lib/jamon-maven-plugin-2.3.4.jar:/home/hadoop/lib/jamon-processor-2.4.1.jar:/home/hadoop/lib/jamon-runtime-2.4.0.jar:/home/hadoop/lib/jasper-compiler-5.5.23.jar:/home/hadoop/lib/jasper-runtime-5.5.23.jar:/home/hadoop/lib/java_util_concurrent_chm.jar:/home/hadoop/lib/java_util_hashtable.jar:/home/hadoop/lib/javax.el_2.1.0.v201004190952.jar:/home/hadoop/lib/javax.servlet.jsp_2.1.0.v201004190952.jar:/home/hadoop/lib/jaxb-api-2.1.jar:/home/hadoop/lib/jaxb-impl-2.1.12.jar:/home/hadoop/lib/jcommon-0.9.6.jar:/home/hadoop/lib/jersey-core-1.4.jar:/home/hadoop/lib/jersey-json-1.4.jar:/home/hadoop/lib/jersey-server-1.4.jar:/home/hadoop/lib/jettison-1.1.jar:/home/hadoop/lib/jetty-6.1.26.jar:/home/hadoop/lib/jetty-ajp-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-all-7.5.4.v20111024-javadoc.jar:/home/hadoop/lib/jetty-annotations-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-client-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-continuation-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-deploy-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-http-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-io-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-jmx-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-jndi-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-jsp-2.1-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-overlay-deployer-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-plus-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-policy-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-rewrite-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-security-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-server-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-servlet-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-servlets-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-spring-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-util-6.1.26.jar:/home/hadoop/lib/jetty-util-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-webapp-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-websocket-7.5.4.v20111024.jar:/home/hadoop/lib/jetty-xml-7.5.4.v20111024.jar:/home/hadoop/lib/jfreechart-0.9.21.jar:/home/hadoop/lib/joda-time-2.1.jar:/home/hadoop/lib/jruby-complete-no-joda-1.6.5.jar:/home/hadoop/lib/jsp-2.1-6.1.14.jar:/home/hadoop/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/lib/jsp-impl-2.1.3-b10.jar:/home/hadoop/lib/jsr305.jar:/home/hadoop/lib/jsr311-api-1.1.1.jar:/home/hadoop/lib/junit-4.8.1.jar:/home/hadoop/lib/junit.jar:/home/hadoop/lib/jython.jar:/home/hadoop/lib/kfs-0.2.2.jar:/home/hadoop/lib/libthrift-0.7.0.jar:/home/hadoop/lib/log4j-1.2.16.jar:/home/hadoop/lib/mail-1.4.3.jar:/home/hadoop/lib/mina-core-2.0.0-M5.jar:/home/hadoop/lib/mockito-all-1.8.5.jar:/home/hadoop/lib/netty-3.2.4.Final.jar:/home/hadoop/lib/opencsv-1.8.jar:/home/hadoop/lib/org.apache.taglibs.standard.glassfish_1.2.0.v201004190952.jar:/home/hadoop/lib/paranamer-2.3.jar:/home/hadoop/lib/plexus-active-collections-1.0-beta-2.jar:/home/hadoop/lib/plexus-build-api-0.0.4.jar:/home/hadoop/lib/plexus-compiler-api-1.8.1.jar:/home/hadoop/lib/plexus-compiler-javac-1.5.3.jar:/home/hadoop/lib/plexus-compiler-manager-1.5.3.jar:/home/hadoop/lib/plexus-digest-1.0.jar:/home/hadoop/lib/plexus-interpolation-1.12.jar:/home/hadoop/lib/plexus-io-1.0-alpha-4.jar:/home/hadoop/lib/plexus-resources-1.0-alpha-5.jar:/home/hadoop/lib/plexus-utils-2.1.jar:/home/hadoop/lib/plexus-velocity-1.1.3.jar:/home/hadoop/lib/protobuf-java-2.4.0a.jar:/home/hadoop/lib/servlet-api-2.5.jar:/home/hadoop/lib/slf4j-api-1.6.1.jar:/home/hadoop/lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/lib/smart-cli-parser.jar:/home/hadoop/lib/snappy-java-1.0.3.2.jar:/home/hadoop/lib/stax-1.2.0.jar:/home/hadoop/lib/stax-api-1.0.1.jar:/home/hadoop/lib/surefire-api-2.4.3.jar:/home/hadoop/lib/surefire-booter-2.4.3.jar:/home/hadoop/lib/surefire-junit4-2.10.jar:/home/hadoop/lib/typica.jar:/home/hadoop/lib/velocity-1.7.jar:/home/hadoop/lib/visualization-datasource-1.1.1.jar:/home/hadoop/lib/xmlenc-0.52.jar:/home/hadoop/lib/xml-maven-plugin-1.0-beta-3.jar:/home/hadoop/lib/zookeeper-3.4.2.jar:/home/hadoop/lib/jsp-2.1/*.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../conf::/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/ant-eclipse-1.0-jvm1.2.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/avro-1.5.3.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/avro-ipc-1.5.3.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/avro-mapred-1.5.3.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/commons-io-1.4.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/hsqldb-1.8.0.10.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/jackson-core-asl-1.7.3.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/jackson-mapper-asl-1.7.3.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/mysql-connector-java-5.1.22-bin.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../lib/snappy-java-1.0.3.2.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../sqoop-1.4.2.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/bin/../sqoop-test-1.4.2.jar::/home/hadoop/hadoop-core.jar:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/sqoop-1.4.2.jar
> >> > > > Note: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/ml_ys_log_gmt_test.java uses or overrides a deprecated API.
> >> > > > Note: Recompile with -Xlint:deprecation for details.
> >> > > > 12/10/10 16:25:28 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/ml_ys_log_gmt_test.jar
> >> > > > 12/10/10 16:25:28 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/ml_ys_log_gmt_test.jar
> >> > > > 12/10/10 16:25:28 DEBUG orm.CompilationManager: Scanning for .class files in directory: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b
> >> > > > 12/10/10 16:25:28 DEBUG orm.CompilationManager: Got classfile: /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/ml_ys_log_gmt_test.class -> ml_ys_log_gmt_test.class
> >> > > > 12/10/10 16:25:28 DEBUG orm.CompilationManager: Finished writing jar file /tmp/sqoop-hadoop/compile/7f5cd67c0aa5dbf20256f72b30ae922b/ml_ys_log_gmt_test.jar
> >> > > > 12/10/10 16:25:28 INFO mapreduce.ExportJobBase: Beginning export of ml_ys_log_gmt_test
> >> > > > 12/10/10 16:25:29 WARN mapreduce.ExportJobBase: null FileStatus object in isSequenceFiles(); assuming false.
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Using InputFormat: class org.apache.sqoop.mapreduce.ExportInputFormat
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/sqoop-1.4.2.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/mysql-connector-java-5.1.22-bin.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/sqoop-1.4.2.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/sqoop-1.4.2.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/snappy-java-1.0.3.2.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/commons-io-1.4.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/avro-ipc-1.5.3.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/jackson-core-asl-1.7.3.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/avro-1.5.3.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/ant-eclipse-1.0-jvm1.2.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/jackson-mapper-asl-1.7.3.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/ant-contrib-1.0b3.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/avro-mapred-1.5.3.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/mysql-connector-java-5.1.22-bin.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/paranamer-2.3.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/jopt-simple-3.2.jar
> >> > > > 12/10/10 16:25:29 DEBUG mapreduce.JobBase: Adding to job classpath: file:/home/hadoop/sqoop-1.4.2.bin__hadoop-1.0.0/lib/hsqldb-1.8.0.10.jar
> >> > > > 12/10/10 16:25:29 INFO mapred.JobClient: Default number of map tasks: 4
> >> > > > 12/10/10 16:25:29 INFO mapred.JobClient: Default number of reduce tasks: 0
> >> > > > 12/10/10 16:25:30 INFO mapred.JobClient: Setting group to hadoop
> >> > > > 12/10/10 16:25:30 INFO input.FileInputFormat: Total input paths to process : 2
> >> > > > 12/10/10 16:25:30 DEBUG mapreduce.ExportInputFormat: Target numMapTasks=4
> >> > > > 12/10/10 16:25:30 DEBUG mapreduce.ExportInputFormat: Total input bytes=0
> >> > > > 12/10/10 16:25:30 DEBUG mapreduce.ExportInputFormat: maxSplitSize=0
> >> > > > 12/10/10 16:25:30 INFO input.FileInputFormat: Total input paths to process : 2
> >> > > > 12/10/10 16:25:30 DEBUG mapreduce.ExportInputFormat: Generated splits:
> >> > > > 12/10/10 16:25:30 INFO mapred.JobClient: Running job: job_201210101503_0017
> >> > > > 12/10/10 16:25:31 INFO mapred.JobClient:  map 0% reduce 0%
> >> > > > 12/10/10 16:25:48 INFO mapred.JobClient: Job complete: job_201210101503_0017
> >> > > > 12/10/10 16:25:48 INFO mapred.JobClient: Counters: 4
> >> > > > 12/10/10 16:25:48 INFO mapred.JobClient:   Job Counters
> >> > > > 12/10/10 16:25:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=8498
> >> > > > 12/10/10 16:25:48 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> >> > > > 12/10/10 16:25:48 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> >> > > > 12/10/10 16:25:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> >> > > > 12/10/10 16:25:48 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 19.0791 seconds (0 bytes/sec)
> >> > > > 12/10/10 16:25:48 INFO mapreduce.ExportJobBase: Exported 0 records.
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Matthieu Labour, Engineering | ActionX |
> >> > 584 Broadway, Suite 1002 – NY, NY 10012
> >> > 415-994-3480 (m)
> >>
> >
> >
> >
> > --
> > Matthieu Labour, Engineering | ActionX |
> > 584 Broadway, Suite 1002 – NY, NY 10012
> > 415-994-3480 (m)
> >
> >
> 
> 
> -- 
> Matthieu Labour, Engineering | ActionX |
> 584 Broadway, Suite 1002 – NY, NY 10012
> 415-994-3480 (m)
