From: Bhavesh Shah <bhavesh25shah@gmail.com>
To: sqoop-user@incubator.apache.org
Subject: Re: Table not creating in hive
Date: Fri, 3 Feb 2012 14:02:56 +0530

Hello Alex,
I have checked the rights and I am using the same user for importing too.
Do I need to install Hive again to solve my problem?

--
Regards,
Bhavesh Shah
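On the rights check Alex suggests below: assuming the default warehouse location /user/hive/warehouse (the directory Bhavesh mentions later in the thread), commands along these lines show which user owns each table directory. The table directory name "appointment" is only an assumption based on the import logs further down, and /user/hadoop/Appointment is the staging path that appears in those logs:

    # Who owns the warehouse and the individual table directories?
    hadoop dfs -ls /user/hive/warehouse
    hadoop dfs -ls /user/hive/warehouse/appointment

    # The import in this thread also stages data under the importing user's home:
    hadoop dfs -ls /user/hadoop/Appointment

If a directory shows rwx for another user (for example hue) and read-only access for everyone else, that matches the situation Alex describes.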
On Fri, Feb 3, 2012 at 1:45 PM, alo alt <wget.null@googlemail.com> wrote:

> Check the HDFS rights:
>   hadoop dfs -ls /path/
>
> The config looks okay, so I assume that some tables were created in Hue
> with other rights (rw for the hue user, r for all others). You can check
> that with -ls or in the web UI -> browse filesystem -> click through
> /hive/warehouse.
>
> Use the same user for import, operations and Hue. Or enable Kerberos auth ;)
>
> best,
> Alex
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On Feb 3, 2012, at 9:09 AM, Bhavesh Shah wrote:
>
> > Hello Alex,
> > Thanks for your reply.
> > I have observed that this is happening with some tables only: some tables
> > import the complete data, while some do not.
> > But whether or not the import completes, their entry is not listed by the
> > "SHOW TABLES" command.
> >
> > I do not understand why this is happening.
> > Is there any problem in the configuration?
> >
> > -
> > Thanks and Regards,
> > Bhavesh Shah
> >
> >
> > On Fri, Feb 3, 2012 at 1:34 PM, alo alt <wget.null@googlemail.com> wrote:
> > 0 records were imported, so the table will not be created, since there is
> > no data. Also check the file:
> > > /java.io.IOException: Destination '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists
> > Sqoop will move it, but it still exists.
> >
> > - Alex
> >
> > --
> > Alexander Lorenz
> > http://mapredit.blogspot.com
> >
> > On Feb 3, 2012, at 6:22 AM, Bhavesh Shah wrote:
> >
> > >
> > > ---------- Forwarded message ----------
> > > From: Bhavesh Shah <bhavesh25shah@gmail.com>
> > > Date: Fri, Feb 3, 2012 at 10:38 AM
> > > Subject: Re: Table not creating in hive
> > > To: dev@hive.apache.org, sqoop-user@incubator.apache.org
> > >
> > > Hello Bejoy & Alexis,
> > > Thanks for your reply.
> > > I am using MySQL as the database (and not Derby).
> > > Previously I was using --split-by 1 and it was working fine, but when I
> > > installed MySQL and changed the database I got an error for the
> > > --split-by option, and that is why I use -m 1.
> > > But again, because of that, it shows that 0 records were retrieved.
> > >
> > > Here are the logs:
> > > hadoop@ubuntu:~/sqoop-1.3.0-cdh3u1/bin$ ./sqoop-import --connect 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest' --table Appointment --hive-table appointment -m 1 --hive-import --verbose
> > >
> > > 12/01/31 22:33:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
> > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
> > > 12/01/31 22:33:40 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Added factory com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by /home/hadoop/sqoop-1.3.0-cdh3u1/conf/managers.d/mssqoop-sqlserver
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
> > > 12/01/31 22:33:40 INFO SqlServer.MSSQLServerManagerFactory: Using Microsoft's SQL Server - Hadoop Connector
> > > 12/01/31 22:33:40 INFO manager.SqlManager: Using default fetchSize of 1000
> > > 12/01/31 22:33:40 DEBUG sqoop.ConnFactory: Instantiated ConnManager com.microsoft.sqoop.SqlServer.MSSQLServerManager@116471f
> > > 12/01/31 22:33:40 INFO tool.CodeGenTool: Beginning code generation
> > > 12/01/31 22:33:40 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next > query: 1000 > > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: > SELECT TOP 1 * FROM [Appointment] > > > 12/01/31 22:33:41 DEBUG manager.SqlManager: Using fetchSize for next > query: 1000 > > > 12/01/31 22:33:41 INFO manager.SqlManager: Executing SQL statement: > SELECT TOP 1 * FROM [Appointment] > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: selected columns: > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: AppointmentUid > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: ExternalID > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: PatientUid > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: StartTime > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: EndTime > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: ResourceUid > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Note > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: AppointmentTypeUid > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: AppointmentStatusUid > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: CheckOutNote > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: CreatedDate > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: CreatedByUid > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Writing source file: > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.ja= va > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Table name: Appointment > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: Columns: AppointmentUid:1, > ExternalID:12, PatientUid:1, StartTime:93, EndTime:93, ResourceUid:1, > RenderringProviderUid:1, ReferringProviderUid:1, ServiceLocationUid:1, > Note:-1, AppointmentTypeUid:1, AppointmentStatusUid:1, CheckOutNote:12, > ROSxml:-1, CreatedDate:93, CreatedByUid:1, ModifiedDate:93, > ModifiedByUid:1, SingleDayAppointmentGroupUid:1, > MultiDayAppointmentGroupUid:1, > > > 12/01/31 22:33:41 DEBUG orm.ClassWriter: sourceFilename is > Appointment.java > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Found existing > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/ > > > 12/01/31 22:33:41 INFO orm.CompilationManager: HADOOP_HOME is > /home/hadoop/hadoop-0.20.2-cdh3u2 > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Adding source file: > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.ja= va > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: Invoking javac with > args: > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: -sourcepath > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/ > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: -d > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/ > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: -classpath > > > 12/01/31 22:33:41 DEBUG orm.CompilationManager: > /home/hadoop/hadoop-0.20.2-cdh3u2//conf:/usr/lib/jvm/java-6-sun-1.6.0.26/= /lib/tools.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/:/home/hadoop/hadoop-0.20.= 2-cdh3u2//hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//= lib/ant-contrib-1.0b3.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjrt-= 1.6.5.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/aspectjtools-1.6.5.jar:/ho= me/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-cli-1.2.jar:/home/hadoop/hadoop= -0.20.2-cdh3u2//lib/commons-codec-1.4.jar:/home/hadoop/hadoop-0.20.2-cdh3u2= //lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commo= 
ns-el-1.0.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-httpclient-3.1= .jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-1.0.4.jar:/home= /hadoop/hadoop-0.20.2-cdh3u2//lib/commons-logging-api-1.0.4.jar:/home/hadoo= p/hadoop-0.20.2-cdh3u2//lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.= 2-cdh3u2//lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hadoop-= fairscheduler-0.20.2-cdh3u2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/hsql= db-1.8.0.10.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-core-asl-1.5= .2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jackson-mapper-asl-1.5.2.jar:= /home/hadoop/hadoop-0.20.2-cdh3u2//lib/jasper-compiler-5.5.12.jar:/home/had= oop/hadoop-0.20.2-cdh3u2//lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop= -0.20.2-cdh3u2//lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib= /jetty-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jetty-s= ervlet-tester-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/= jetty-util-6.1.26.cloudera.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsc= h-0.1.42.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/junit-4.5.jar:/home/had= oop/hadoop-0.20.2-cdh3u2//lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.2-cdh3= u2//lib/libfb303.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/libthrift.jar:/= home/hadoop/hadoop-0.20.2-cdh3u2//lib/log4j-1.2.15.jar:/home/hadoop/hadoop-= 0.20.2-cdh3u2//lib/mockito-all-1.8.2.jar:/home/hadoop/hadoop-0.20.2-cdh3u2/= /lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-2= 0081211.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/servlet-api-2.5-6.1.14.j= ar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/slf4j-api-1.4.3.jar:/home/hadoop/= hadoop-0.20.2-cdh3u2//lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.= 2-cdh3u2//lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.= 1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.2-cdh3u2//lib/jsp-2.1/jsp-api-2.1.ja= r:/home/hadoop/sqoop-1.3.0-cdh3u1/conf/::/home/hadoop/sqoop-1.3.0-cdh3u1//l= ib/ant-contrib-1.0b3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/ant-eclipse-1= .0-jvm1.2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/avro-1.5.1.jar:/home/had= oop/sqoop-1.3.0-cdh3u1//lib/avro-ipc-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh= 3u1//lib/avro-mapred-1.5.1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/commons= -io-1.4.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/hadoop-mrunit-0.20.2-CDH3b= 2-SNAPSHOT.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-core-asl-1.7.3.= jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/jackson-mapper-asl-1.7.3.jar:/home= /hadoop/sqoop-1.3.0-cdh3u1//lib/jopt-simple-3.2.jar:/home/hadoop/sqoop-1.3.= 0-cdh3u1//lib/paranamer-2.3.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/snappy= -java-1.0.3-rc2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//lib/sqljdbc4.jar:/home= /hadoop/sqoop-1.3.0-cdh3u1//lib/sqoop-sqlserver-1.0.jar:/home/hadoop/hbase-= 0.90.1-cdh3u0//conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbas= e-0.90.1-cdh3u0/:/home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0.jar:= /home/hadoop/hbase-0.90.1-cdh3u0//hbase-0.90.1-cdh3u0-tests.jar:/home/hadoo= p/hbase-0.90.1-cdh3u0//lib/activation-1.1.jar:/home/hadoop/hbase-0.90.1-cdh= 3u0//lib/asm-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/avro-1.3.3.jar:/= home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-cli-1.2.jar:/home/hadoop/hbase= -0.90.1-cdh3u0//lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0/= /lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-httpc= lient-3.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/commons-lang-2.5.jar:/h= 
ome/hadoop/hbase-0.90.1-cdh3u0//lib/commons-logging-1.1.1.jar:/home/hadoop/= hbase-0.90.1-cdh3u0//lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.1-cd= h3u0//lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/guava-r06.ja= r:/home/hadoop/hbase-0.90.1-cdh3u0//lib/hadoop-core-0.20.2-cdh3u0.jar:/home= /hadoop/hbase-0.90.1-cdh3u0//lib/hbase-0.90.1-cdh3u0.jar:/home/hadoop/hbase= -0.90.1-cdh3u0//lib/jackson-core-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cd= h3u0//lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jac= kson-mapper-asl-1.5.2.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jackson-xc-= 1.5.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-compiler-5.5.23.jar:= /home/hadoop/hbase-0.90.1-cdh3u0//lib/jasper-runtime-5.5.23.jar:/home/hadoo= p/hbase-0.90.1-cdh3u0//lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u= 0//lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-co= re-1.4.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jersey-json-1.4.jar:/home/= hadoop/hbase-0.90.1-cdh3u0//lib/jersey-server-1.4.jar:/home/hadoop/hbase-0.= 90.1-cdh3u0//lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jet= ty-6.1.26.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jetty-util-6.1.26.jar:/= home/hadoop/hbase-0.90.1-cdh3u0//lib/jruby-complete-1.0.3.jar:/home/hadoop/= hbase-0.90.1-cdh3u0//lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u= 0//lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsp-api= -2.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/jsr311-api-1.1.1.jar:/home/h= adoop/hbase-0.90.1-cdh3u0//lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.1-c= dh3u0//lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/se= rvlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/servlet-api-= 2.5.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/slf4j-api-1.5.8.jar:/home/had= oop/hbase-0.90.1-cdh3u0//lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.9= 0.1-cdh3u0//lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/th= rift-0.2.0.jar:/home/hadoop/hbase-0.90.1-cdh3u0//lib/xmlenc-0.52.jar:/home/= hadoop/hbase-0.90.1-cdh3u0//lib/zookeeper-3.3.3-cdh3u0.jar:/home/hadoop/sqo= op-1.3.0-cdh3u1//sqoop-1.3.0-cdh3u1.jar:/home/hadoop/sqoop-1.3.0-cdh3u1//sq= oop-test-1.3.0-cdh3u1.jar::/home/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.= 20.2-cdh3u2.jar:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar > > > 12/01/31 22:33:42 ERROR orm.CompilationManager: Could not rename > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.ja= va > to /home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java > > > java.io.IOException: Destination > '/home/hadoop/sqoop-1.3.0-cdh3u1/bin/./Appointment.java' already exists > > > at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811) > > > at > com.cloudera.sqoop.orm.CompilationManager.compile(CompilationManager.java= :227) > > > at > com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83) > > > at > com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337) > > > at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423) > > > at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144) > > > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) > > > at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180) > > > at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:219) > > > at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:228) > > > at com.cloudera.sqoop.Sqoop.main(Sqoop.java:237) > > > 12/01/31 22:33:42 INFO orm.CompilationManager: Writing jar file: > 
/tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.ja= r > > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Scanning for .class > files in directory: > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff > > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Got classfile: > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.cl= ass > -> Appointment.class > > > 12/01/31 22:33:42 DEBUG orm.CompilationManager: Finished writing jar > file > /tmp/sqoop-hadoop/compile/a7d94d7420001a743a4746242116beff/Appointment.ja= r > > > 12/01/31 22:33:42 INFO mapreduce.ImportJobBase: Beginning import of > Appointment > > > 12/01/31 22:33:42 DEBUG manager.SqlManager: Using fetchSize for next > query: 1000 > > > 12/01/31 22:33:42 INFO manager.SqlManager: Executing SQL statement: > SELECT TOP 1 * FROM [Appointment] > > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using table > class: Appointment > > > 12/01/31 22:33:42 DEBUG mapreduce.DataDrivenImportJob: Using > InputFormat: class com.microsoft.sqoop.SqlServer.MSSQLServerDBInputFormat > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/sqoop-1.3.0-cdh3u1.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jopt-simple-3.2.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-mapper-asl-1.7.3.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/snappy-java-1.0.3-rc2.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/hadoop-mrunit-0.20.2-CDH3b2-SNAP= SHOT.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-eclipse-1.0-jvm1.2.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqoop-sqlserver-1.0.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-mapred-1.5.1.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-1.5.1.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/avro-ipc-1.5.1.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/paranamer-2.3.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/sqljdbc4.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/jackson-core-asl-1.7.3.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/commons-io-1.4.jar > > > 12/01/31 22:33:42 DEBUG mapreduce.JobBase: Adding to job classpath: > 
file:/home/hadoop/sqoop-1.3.0-cdh3u1/lib/ant-contrib-1.0b3.jar
> > > 12/01/31 22:33:43 INFO mapred.JobClient: Running job: job_201201311414_0051
> > > 12/01/31 22:33:44 INFO mapred.JobClient:  map 0% reduce 0%
> > > 12/01/31 22:33:48 INFO mapred.JobClient:  map 100% reduce 0%
> > > 12/01/31 22:33:48 INFO mapred.JobClient: Job complete: job_201201311414_0051
> > > 12/01/31 22:33:48 INFO mapred.JobClient: Counters: 11
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   Job Counters
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4152
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Launched map tasks=1
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   FileSystemCounters
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     HDFS_BYTES_READ=87
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=61985
> > > 12/01/31 22:33:48 INFO mapred.JobClient:   Map-Reduce Framework
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map input records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Spilled Records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     Map output records=0
> > > 12/01/31 22:33:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
> > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 6.2606 seconds (0 bytes/sec)
> > > 12/01/31 22:33:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> > > 12/01/31 22:33:48 INFO hive.HiveImport: Removing temporary files from import process: Appointment/_logs
> > > 12/01/31 22:33:48 INFO hive.HiveImport: Loading uploaded data into Hive
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.inputTable: Appointment
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Hive.outputTable: appointment
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:48 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
> > > 12/01/31 22:33:48 INFO manager.SqlManager: Executing SQL statement: SELECT TOP 1 * FROM [Appointment]
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column StartTime had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column EndTime had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column CreatedDate had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 WARN hive.TableDefWriter: Column ModifiedDate had to be cast to a less precise type in Hive
> > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE IF NOT EXISTS `appointment` ( `AppointmentUid` STRING, `ExternalID` STRING, `PatientUid` STRING, `StartTime` STRING, `EndTime` STRING, `Note` STRING, `AppointmentTypeUid` STRING, `AppointmentStatusUid` STRING, `CheckOutNote` STRING, `CreatedDate` STRING, `CreatedByUid` STRING) COMMENT 'Imported by sqoop on 2012/01/31 22:33:48' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
> > > 12/01/31 22:33:48 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://localhost:54310/user/hadoop/Appointment' INTO TABLE `appointment`
> > > 12/01/31 22:33:48 DEBUG hive.HiveImport: Using external Hive process.
> > > 12/01/31 22:33:50 INFO hive.HiveImport: Hive history file=/tmp/hadoop/hive_job_log_hadoop_201201312233_1008229902.txt
> > > 12/01/31 22:33:52 INFO hive.HiveImport: OK
> > > 12/01/31 22:33:52 INFO hive.HiveImport: Time taken: 2.006 seconds
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Loading data to table default.appointment
> > > 12/01/31 22:33:53 INFO hive.HiveImport: OK
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Time taken: 0.665 seconds
> > > 12/01/31 22:33:53 INFO hive.HiveImport: Hive import complete.
> > >
> > >
> > > --
> > > Thanks and Regards,
> > > Bhavesh Shah
> > >
> > >
> > > On Thu, Feb 2, 2012 at 8:20 PM, Alexis De La Cruz Toledo <alexisdct@gmail.com> wrote:
> > > This is because you need the metastore.
> > > If you did not install it in a database, it was installed with Derby in
> > > the directory from which you accessed Hive; remember where that was.
> > > There you should find the directory named _metastore, and you should
> > > access Hive from that directory.
> > >
> > > Regards.
> > >
> > > On 2 February 2012 at 05:46, Bhavesh Shah <bhavesh25shah@gmail.com> wrote:
> > >
> > > > Hello all,
> > > >
> > > > After successfully importing the tables into Hive, I am not able to
> > > > see them in Hive.
> > > > When I imported a table I saw its directory on HDFS (under
> > > > /user/hive/warehouse/), but when I execute the "SHOW TABLES" command
> > > > in Hive the table is not in the list.
> > > >
> > > > I have searched a lot about this but have not found anything.
> > > > Please suggest a solution for it.
> > > >
> > > >
> > > > --
> > > > Thanks and Regards,
> > > > Bhavesh Shah
> > > >
> > >
> > >
> > > --
> > > Ing. Alexis de la Cruz Toledo.
> > > *Av. Instituto Politécnico Nacional No. 2508 Col. San Pedro Zacatenco.
> > > México, D.F, 07360 *
> > > *CINVESTAV, DF.*
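To illustrate Alexis's point: with the embedded Derby metastore, Hive creates a metastore_db directory in whatever working directory the client was started from, so tables registered by a Sqoop --hive-import run from one directory are invisible to a Hive shell started from another. A quick check along these lines would show whether that is what is happening here; the Sqoop bin directory is taken from the shell prompt in the logs above, and metastore_db is Derby's default directory name (worth ruling out even though MySQL was mentioned as the database):

    # Start Hive from the directory the Sqoop import was launched from
    cd /home/hadoop/sqoop-1.3.0-cdh3u1/bin
    ls -d metastore_db        # present only if an embedded Derby metastore was created here
    hive -e 'SHOW TABLES;'    # if "appointment" is listed here but not elsewhere,
                              # each working directory has its own Derby metastore

A lasting fix is to point every client at one shared metastore, for example by setting javax.jdo.option.ConnectionURL in hive-site.xml to an absolute Derby path or to the MySQL database mentioned earlier in the thread.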
> > >
> > >
> > > --
> > > Regards,
> > > Bhavesh Shah
> > >
> >
> > -
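Regarding the --split-by error and the "Appointment.java already exists" message earlier in the thread, a re-run along the following lines is one option. The split column and the output directories here are assumptions, not values taken from the thread: any indexed, reasonably uniform column can serve as the split key, and --outdir/--bindir simply keep Sqoop's generated code out of the bin directory so the rename error does not recur:

    # Remove the stale generated file left behind by the earlier run
    rm -f /home/hadoop/sqoop-1.3.0-cdh3u1/bin/Appointment.java

    # Re-run the import with an explicit split column and a separate code directory
    ./sqoop-import \
      --connect 'jdbc:sqlserver://192.168.1.1;username=abcd;password=12345;database=FIGMDHadoopTest' \
      --table Appointment --hive-table appointment --hive-import \
      --split-by CreatedDate -m 4 \
      --outdir /tmp/sqoop-gen --bindir /tmp/sqoop-gen \
      --verbose

If the import still reports "Retrieved 0 records.", the source table itself is worth checking on the SQL Server side (for example with SELECT COUNT(*) FROM Appointment), since an empty result set would also explain why nothing useful shows up in Hive.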