sqoop-dev mailing list archives

From "Richard (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SQOOP-1617) MySQL fetch-size behavior changed with SQOOP-1400
Date Mon, 10 Nov 2014 07:59:34 GMT

    [ https://issues.apache.org/jira/browse/SQOOP-1617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204482#comment-14204482 ]

Richard commented on SQOOP-1617:
--------------------------------

I tested mysql-connector-java-5.1.17.jar. It failed with the same message shown in SQOOP-1400.
{code}
14/07/24 10:44:48 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test1` AS t LIMIT 1
14/07/24 10:44:48 ERROR manager.SqlManager: Error reading from database: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@1cfabc3a is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@1cfabc3a is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:934)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:931)
	at com.mysql.jdbc.MysqlIO.checkForOutstandingStreamingData(MysqlIO.java:2735)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1899)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2569)
	at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1521)
	at com.mysql.jdbc.ConnectionImpl.getMaxBytesPerChar(ConnectionImpl.java:3003)
	at com.mysql.jdbc.Field.getMaxBytesPerCharacter(Field.java:602)
	at com.mysql.jdbc.ResultSetMetaData.getPrecision(ResultSetMetaData.java:445)
	at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:285)
	at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:240)
	at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:226)
	at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:295)
	at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1773)
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1578)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:96)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
14/07/24 10:44:48 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1584)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:96)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
{code}
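For context: the error in the log above occurs because MySQL Connector/J effectively supports only two result-set modes, and in streaming mode no other statement may run on the connection until the streaming result set is closed. A minimal sketch of the fetch-size interpretation follows; the class and method names are illustrative only and are not part of the Connector/J API.

```java
// Sketch (under stated assumptions) of how MySQL Connector/J treats
// Statement.setFetchSize() without useCursorFetch=true: either
// row-by-row streaming (Integer.MIN_VALUE) or buffering the whole
// result set in client memory (any other value).
public class MySqlFetchMode {

    /** Illustrative classifier; not a Connector/J method. */
    static String describe(int fetchSize) {
        if (fetchSize == Integer.MIN_VALUE) {
            // Streaming mode: rows are read one at a time. While the
            // streaming result set is open, issuing another statement on
            // the same connection raises the "Streaming result set ...
            // is still active" SQLException seen in the log above.
            return "row-by-row streaming";
        }
        // Any other fetch-size hint (e.g. the value passed by Sqoop's
        // --fetch-size option) does not enable incremental fetching:
        // the driver reads the complete result set into memory.
        return "entire result set buffered in memory";
    }

    public static void main(String[] args) {
        System.out.println(describe(Integer.MIN_VALUE));
        System.out.println(describe(1000));
    }
}
```

This is why, as the report below notes, `--fetch-size -2147483648` (Integer.MIN_VALUE) is the only value that restores the pre-SQOOP-1400 streaming behavior.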

> MySQL fetch-size behavior changed with SQOOP-1400
> -------------------------------------------------
>
>                 Key: SQOOP-1617
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1617
>             Project: Sqoop
>          Issue Type: Bug
>          Components: connectors/mysql
>    Affects Versions: 1.4.6
>         Environment: CDH 5.2
> sqoop 1.4.5 (seems to include SQOOP-1400)
> mysql connector version 5.1.33
>            Reporter: Jürgen Thomann
>            Assignee: Jarek Jarcec Cecho
>            Priority: Minor
>             Fix For: 1.4.6
>
>         Attachments: SQOOP-1617.patch
>
>
> SQOOP-1400 changed the connector's default behavior to loading the entire result set into memory. The only working way to restore the old streaming behavior is to use --fetch-size -2147483648 (Integer.MIN_VALUE).
> It would be nice if this could be changed and/or documented: MySQL does not support an arbitrary fetch size; it only offers row-by-row streaming or loading everything into memory.
> The issue is discussed, for example, here:
> http://community.cloudera.com/t5/Data-Ingestion-Integration/Sqoop-GC-overhead-limit-exceeded-after-CDH5-2-update/td-p/20604



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
