sqoop-user mailing list archives

From Bipin Nag <bipin....@gmail.com>
Subject Re: Sqoop Import: Error / by zero at org.apache.sqoop.connector.jdbc.GenericJdbcPartitioner
Date Fri, 10 Apr 2015 06:02:57 GMT
I have reported the bug: https://issues.apache.org/jira/browse/SQOOP-2292
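
For anyone who finds this thread later: the sketch below is my reading of the
failure, not the actual connector source. With Extractors set to 1 and nulls
allowed for the partition column, the partitioner appears to reserve one
partition for the IS NULL slice, which leaves zero partitions for the integer
range and makes the interval computation divide by zero. The class name,
bounds, and variable names here are assumptions for illustration (a second
sketch after the quoted thread shows what ${CONDITIONS} expands to):

    // Hypothetical sketch of the suspected arithmetic in
    // GenericJdbcPartitioner.partitionIntegerColumn; not the real source.
    public class PartitionSketch {
        public static void main(String[] args) {
            long numberPartitions = 1;         // Extractors: 1
            boolean allowNullPartition = true; // "Null value allowed": true
            long min = 1, max = 1_000_000;     // made-up CustomerID bounds

            if (allowNullPartition) {
                numberPartitions -= 1;         // one slice reserved for IS NULL -> 0 left
            }

            // mirrors the division that blows up at GenericJdbcPartitioner.java:317
            long interval = (max - min) / numberPartitions; // ArithmeticException: / by zero
            System.out.println(interval);
        }
    }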

On 8 April 2015 at 22:37, Abraham Elmahrek <abe@cloudera.com> wrote:

> It seems this is a special case in the Generic JDBC connector: extractors
> set to 1, "Null value allowed for the partition column" enabled, and an
> integer partition column.
>
> Could you file a bug at https://issues.apache.org/jira/browse/SQOOP/ ?
>
> On Wed, Apr 8, 2015 at 12:40 AM, Bipin Nag <bipin.nag@gmail.com> wrote:
>
>> Hi everyone,
>>
>> I am using Sqoop v2.0.0-SNAPSHOT. I am trying to import a table from MS
>> SQL Server and create a Parquet file on the local filesystem via the Kite
>> SDK. I get the following error:
>>
>> Stack trace: java.lang.ArithmeticException: / by zero
>>     at org.apache.sqoop.connector.jdbc.GenericJdbcPartitioner.partitionIntegerColumn(GenericJdbcPartitioner.java:317)
>>     at org.apache.sqoop.connector.jdbc.GenericJdbcPartitioner.getPartitions(GenericJdbcPartitioner.java:86)
>>     at org.apache.sqoop.connector.jdbc.GenericJdbcPartitioner.getPartitions(GenericJdbcPartitioner.java:38)
>>     at org.apache.sqoop.job.mr.SqoopInputFormat.getSplits(SqoopInputFormat.java:74)
>>     at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1107)
>>     at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1124)
>>     at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:178)
>>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1023)
>>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:976)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
>>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:976)
>>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:582)
>>     at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submitToCluster(MapreduceSubmissionEngine.java:274)
>>     at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:255)
>>     at org.apache.sqoop.driver.JobManager.start(JobManager.java:288)
>>     at org.apache.sqoop.handler.JobRequestHandler.startJob(JobRequestHandler.java:380)
>>     at org.apache.sqoop.handler.JobRequestHandler.handleEvent(JobRequestHandler.java:116)
>>     at org.apache.sqoop.server.v1.JobServlet.handlePutRequest(JobServlet.java:96)
>>     at org.apache.sqoop.server.SqoopProtocolServlet.doPut(SqoopProtocolServlet.java:79)
>>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
>>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>>     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>>     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>>     at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
>>     at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:277)
>>     at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:555)
>>     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>>     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>>     at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>>     at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>>     at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>>     at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
>>     at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>>     at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>>     at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
>>     at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
>>     at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>>     at java.lang.Thread.run(Thread.java:745)
>>
>> Here is my job detail:
>>
>> Job with id 1 and name Customer import (Enabled: true, Created by bipin
>> at 8/4/15 12:03 PM, Updated by bipin at 8/4/15 12:54 PM)
>> Using link id 3 and Connector id 4
>>   From database configuration
>>     Schema name:
>>     Table name:
>>     Table SQL statement: SELECT * FROM MasterData.Customer WITH (NOLOCK) WHERE CreationDate < '2012-01-01' AND ${CONDITIONS}
>>     Table column names:
>>     Partition column name: CustomerID
>>     Null value allowed for the partition column: true
>>     Boundary query:
>>   Incremental read
>>     Check column:
>>     Last value:
>>   Throttling resources
>>     Extractors: 1
>>     Loaders: 1
>>   To Kite Dataset Configuration
>>     Dataset URI: dataset:file:/home/bipin/data/Customer
>>     File format: PARQUET
>>
>> What am I doing wrong?
>> Thanks
>>
>>
>
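
As a stopgap until the fix lands, raising Extractors to 2 or disabling "Null
value allowed for the partition column" should presumably avoid the zero
divisor. For context on what the partitioner produces when it does work, here
is an illustrative sketch of how ${CONDITIONS} in the table SQL statement
quoted above is replaced by each partition's predicate; the predicates and
boundary values are made up, not captured output:

    // Illustrative only: per-partition query generation for a run with
    // nulls allowed and enough extractors. Not the real connector code.
    public class ConditionsSketch {
        public static void main(String[] args) {
            String sql = "SELECT * FROM MasterData.Customer WITH (NOLOCK) "
                       + "WHERE CreationDate < '2012-01-01' AND ${CONDITIONS}";

            // assumed partition predicates over CustomerID
            String[] conditions = {
                "CustomerID IS NULL",
                "CustomerID >= 1 AND CustomerID < 500000",
                "CustomerID >= 500000 AND CustomerID <= 1000000"
            };

            for (String c : conditions) {
                // each extractor runs the statement with its own predicate
                System.out.println(sql.replace("${CONDITIONS}", c));
            }
        }
    }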
