sqoop-user mailing list archives

From alo alt <wget.n...@googlemail.com>
Subject Re: Sqoop and Teradata
Date Fri, 27 Jan 2012 09:34:34 GMT
try sqoop export -D sqoop.teradata.import.use.temporary.table=false …

That prevents Sqoop from creating a temporary table on Teradata.
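A full invocation might look like the following sketch; the host, credentials, and paths are placeholders, and the generic `-D` option is placed before the tool-specific arguments, which is where Hadoop's generic options must go:

```shell
# Sketch only: <host>, <db>, <user>, <pass>, <table>, and the export
# directory are placeholders, not values from this thread.
# Generic -D options must precede the tool-specific arguments.
sqoop export \
  -D sqoop.teradata.import.use.temporary.table=false \
  --driver com.teradata.jdbc.TeraDriver \
  --connect jdbc:teradata://<host>/database=<db> \
  --username <user> --password <pass> \
  --table <table> \
  --export-dir /path/to/export/data
```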

- Alex 

--
Alexander Lorenz
http://mapredit.blogspot.com

On Jan 26, 2012, at 9:55 PM, Bilung Lee wrote:

> The configuration property "sqoop.export.records.per.statement" can be used to configure the batch size in this case.
> 
> Have you tried Cloudera Connector for Teradata? With the temporary tables used by the connector, you may be able to get around the deadlock.
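As a sketch, the batch size could be set like this; the value of 100 and all connection details are illustrative placeholders, not recommendations from this thread:

```shell
# Sketch only: groups up to 100 records into each INSERT statement.
# <host>, <db>, <user>, <pass>, and <table> are placeholders.
sqoop export \
  -D sqoop.export.records.per.statement=100 \
  --driver com.teradata.jdbc.TeraDriver \
  --connect jdbc:teradata://<host>/database=<db> \
  --username <user> --password <pass> \
  --table <table> \
  --export-dir /path/to/export/data \
  --batch
```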
> 
> 
> On Tue, Jan 24, 2012 at 5:04 PM, Srinivas Surasani <vasajb@gmail.com> wrote:
> Hi All,
> 
> I'm working on Hadoop CDH3 U0 and Sqoop CDH3 U2.
> 
> I'm trying to export CSV files from HDFS to Teradata. It works well with the number of mappers set to 1 (with batch loading of 1000 records at a time), but when I increase the number of mappers beyond one I get the following error. Also, is it possible to configure the batch size at export time?
> 
> 
>  sqoop export --verbose --driver com.teradata.jdbc.TeraDriver --connect jdbc:teradata://xxxx/database=xxxx --username xxxxx --password xxxxx --table xxxx --export-dir /user/surasani/10minutes.txt --fields-terminated-by '|' -m 4 --batch
> 
> 12/01/24 16:17:21 INFO mapred.JobClient:  map 3% reduce 0%
> 12/01/24 16:17:48 INFO mapred.JobClient: Task Id : attempt_201112211106_68553_m_000001_2, Status : FAILED
> java.io.IOException: java.sql.BatchUpdateException: [Teradata Database] [TeraJDBC 13.00.00.07] [Error 2631] [SQLState 40001] Transaction ABORTed due to DeadLock.
>         at com.cloudera.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:223)
>         at com.cloudera.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:49)
>         at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:530)
>         at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>         at com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:82)
>         at com.cloudera.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:40)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>         at com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:189)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:646)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>         at org.apache.hadoop.mapred.Child.main(Child.java:262)
> Caused by: java.sql.BatchUpdateException: [Teradata Database] [TeraJDBC 13.00.00.07] [Error 2631] [SQLState 40001] Transaction ABORTed due to DeadLock.
>         at com.teradata.jdbc.jdbc_4.util.ErrorFactory.convertToBatchUpdateException(ErrorFactory.java:116)
>         at com.teradata.jdbc.jdbc_4.PreparedStatement.executeBatchDMLArray(PreparedStatement.java:149)
>         at com.teradata.jdbc.jdbc_3.ifjdbc_4.TeraLocalPreparedStatement.executeBatch(TeraLocalPreparedStatement.java:299)
>         at com.cloudera.sqoop.mapreduce.AsyncSqlOutputFormat$AsyncSqlExecThread.run(AsyncSqlOutputFormat.java:232)
> Caused by: com.teradata.jdbc.jdbc_4.util.JDBCException: [Teradata Database] [TeraJDBC 13.00.00.07] [Error 2631] [SQLState 40001] Transaction ABORTed due to DeadLock.
>         at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:288)
>         at com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:102)
>         at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:285)
>         at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:176)
>         at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:108)
>         at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.run(PreparedBatchStatementController.java:59)
>         at com.teradata.jdbc.jdbc_4.Statement.executeStatement(Statement.java:331)
>         at com.teradata.jdbc.jdbc_4.PreparedStatement.executeBatchDMLArray(PreparedStatement.java:138)
>         ... 2 more
>  
> 12/01/24 16:17:49 INFO mapred.JobClient:  map 4% reduce 0%
> 12/01/24 16:17:52 INFO mapred.JobClient:  map 5% reduce 0%
> 12/01/24 16:21:16 INFO mapred.JobClient: Job complete: job_201112211106_68553
> 12/01/24 16:21:16 INFO mapred.JobClient: Counters: 8
> 12/01/24 16:21:16 INFO mapred.JobClient:   Job Counters
> 12/01/24 16:21:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=1709245
> 12/01/24 16:21:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> 12/01/24 16:21:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> 12/01/24 16:21:16 INFO mapred.JobClient:     Rack-local map tasks=1
> 12/01/24 16:21:16 INFO mapred.JobClient:     Launched map tasks=6
> 12/01/24 16:21:16 INFO mapred.JobClient:     Data-local map tasks=3
> 12/01/24 16:21:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 12/01/24 16:21:16 INFO mapred.JobClient:     Failed map tasks=1
> 12/01/24 16:21:16 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 862.1506 seconds (0 bytes/sec)
> 12/01/24 16:21:16 INFO mapreduce.ExportJobBase: Exported 0 records.
> 12/01/24 16:21:16 ERROR tool.ExportTool: Error during export: Export job failed!
> 
> 
> Thanks,
> Srinivas 
> 

