sqoop-user mailing list archives

From "@Sanjiv Singh" <sanjiv.is...@gmail.com>
Subject Re: OraOop : Sqoop Direct Oracle import failed with error "Error: java.io.IOException: SQLException in nextKeyValue"
Date Fri, 25 Sep 2015 07:14:49 GMT
Thanks Mario,

You saved my day. With "-Doraoop.import.consistent.read=true" removed from the
sqoop command, that same error has indeed disappeared. Thanks a lot again.


Now I am facing another error when I try the same import on a bigger table. It
seems to be related to the shared pool memory configuration on the Oracle side.
Can you help me understand the issue and how to resolve it?
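
For context, here is a rough sketch of the shared-pool checks I plan to run (or ask
our DBA to run) before retrying. It assumes SELECT access to the standard V$ views,
so treat it as a starting point rather than a fix:

-- Current shared pool / SGA sizing parameters
SELECT name, value
FROM   v$parameter
WHERE  name IN ('shared_pool_size', 'sga_target', 'sga_max_size', 'memory_target');

-- How shared pool memory is currently consumed, largest areas first
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  pool = 'shared pool'
ORDER  BY bytes DESC;

-- Free memory remaining in the shared pool
SELECT SUM(bytes) AS free_bytes
FROM   v$sgastat
WHERE  pool = 'shared pool'
AND    name = 'free memory';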


*Error logs : *


2015-09-25 11:11:19,411 [myid:] - INFO  [main:OraOopLog@103] -
Initializing Oracle session with SQL : alter session disable parallel query
2015-09-25 11:11:19,412 [myid:] - INFO  [main:OraOopLog@103] -
Initializing Oracle session with SQL : alter session set
"_serial_direct_read"=true
2015-09-25 11:11:19,412 [myid:] - INFO  [main:OraOopLog@103] -
Initializing Oracle session with SQL : alter session set
tracefile_identifier=oraoop
2015-09-25 11:12:40,764 [myid:] - INFO  [main:OraOopLog@103] - The table
being imported by sqoop has -365544576 blocks that have been divided into
128452 chunks which will be processed in 10 splits. The chunks will be
allocated to the splits using the method : ROUNDROBIN
...
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 928 bytes
of shared memory ("shared pool","unknown object","sga heap(2,1)","KGL
handle")
...
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 40 bytes of
shared memory ("shared pool","SELECT /*+ NO_INDEX(t) */ SS...","sql
area","plspfg : qcspls")
...
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 40 bytes of
shared memory ("shared pool","SELECT /*+ NO_INDEX(t) */ SS...","sql
area","plspfg : qcspls")
....

*Sample logs : *

Error: java.io.IOException: SQLException in nextKeyValue
    at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
    at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.nextKeyValue(OraOopDBRecordReader.java:351)
    at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
    at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 928 bytes
of shared memory ("shared pool","unknown object","sga heap(2,1)","KGL
handle")

    at
oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    at
oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
    at
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
    at
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
    at
oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
    at
oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
    at
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
    at
oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
    at
oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3431)
    at
oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
    at
org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
    at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.executeQuery(OraOopDBRecordReader.java:417)
    at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)



2015-09-25 10:19:26,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1442839036383_0062: ask=1 release= 0 newContainers=5
finishedContainers=6 resourcelimit=<memory:2048, vCores:-29> knownNMs=5
2015-09-25 10:19:26,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1442839036383_0062_01_000040
2015-09-25 10:19:26,269 INFO [IPC Server handler 24 on 55993]
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from
attempt_1442839036383_0062_m_000023_1: Error: java.io.IOException:
SQLException in nextKeyValue
       at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
     at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.nextKeyValue(OraOopDBRecordReader.java:351)
     at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
      at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
  at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
      at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
      at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 40 bytes of
shared memory ("shared pool","SELECT /*+ NO_INDEX(t) */ SS...","sql
area","plspfg : qcspls")

       at
oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    at
oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
    at
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
  at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
        at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
 at
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
   at
oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
  at
oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
       at
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
     at
oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
        at
oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3431)
      at
oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
      at
org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
    at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.executeQuery(OraOopDBRecordReader.java:417)
    at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
  ... 13 more

2015-09-25 10:19:26,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1442839036383_0062_01_000006
2015-09-25 10:19:26,269 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1442839036383_0062_m_000013_2: Container killed by the
ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2015-09-25 10:19:26,269 FATAL [IPC Server handler 27 on 55993]
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task:
attempt_1442839036383_0062_m_000006_3 - exited : java.io.IOException:
SQLException in nextKeyValue
      at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
    at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.nextKeyValue(OraOopDBRecordReader.java:351)
    at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
     at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
 at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
     at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:415)
     at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 40 bytes of
shared memory ("shared pool","SELECT /*+ NO_INDEX(t) */ SS...","sql
area","plspfg : qcspls")

      at
oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
   at
oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
   at
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
       at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
        at
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
  at
oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
 at
oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
      at
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
    at
oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
       at
oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3431)
     at
oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
     at
org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
   at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.executeQuery(OraOopDBRecordReader.java:417)
   at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
 ... 13 more

2015-09-25 10:19:26,269 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1442839036383_0062_m_000023_1: Error:
java.io.IOException: SQLException in nextKeyValue
        at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
      at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.nextKeyValue(OraOopDBRecordReader.java:351)
      at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
       at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
   at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
      at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
       at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
       at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 40 bytes of
shared memory ("shared pool","SELECT /*+ NO_INDEX(t) */ SS...","sql
area","plspfg : qcspls")

        at
oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
     at
oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
     at
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
   at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
 at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
  at
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
    at
oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
   at
oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
        at
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
      at
oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
 at
oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3431)
       at
oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
       at
org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
     at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.executeQuery(OraOopDBRecordReader.java:417)
     at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
   ... 13 more

2015-09-25 10:19:26,270 INFO [IPC Server handler 27 on 55993]
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from
attempt_1442839036383_0062_m_000006_3: Error: java.io.IOException:
SQLException in nextKeyValue
  at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
        at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.nextKeyValue(OraOopDBRecordReader.java:351)
        at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
 at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
     at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
 at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
       at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
 at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLException: ORA-04031: unable to allocate 40 bytes of
shared memory ("shared pool","SELECT /*+ NO_INDEX(t) */ SS...","sql
area","plspfg : qcspls")

  at
oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
       at
oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
       at
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
        at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
   at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
    at
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
      at
oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
     at
oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
  at
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
        at
oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
   at
oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3431)
 at
oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
 at
org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
       at
org.apache.sqoop.manager.oracle.OraOopDBRecordReader.executeQuery(OraOopDBRecordReader.java:417)
       at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
     ... 13 more

2015-09-25 10:19:26,269 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
completed container container_1442839036383_0062_01_000039
2015-09-25 10:19:26,270 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
report from attempt_1442839036383_0062_m_000004_0: Container killed by the
ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143







Regards
Sanjiv Singh
Mob :  +091 9990-447-339

On Fri, Sep 25, 2015 at 12:12 PM, Mario Amatucci <mamatucci@gmail.com>
wrote:

> Hi Sanjiv,
> when you use
>
> -Doraoop.import.consistent.read=true \
>
> from
> https://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html
>
> Set to true to ensure all mappers read from the same point in time.
> The System Change Number (SCN) is passed down to all mappers, which
> use the Oracle Flashback Query to query the table as at that SCN.
>
> =>
>
> your Oracle instance needs to save that 'snapshot' of all the data somewhere,
> but at the same time it needs to keep the data changing as new DML comes in
> on the table...
>
> you need that space available
> contact your oracle admin for details
>
> Best, Mario
> Kind Regards,
> Mario Amatucci
>
>
> On 25 September 2015 at 08:32, @Sanjiv Singh <sanjiv.is.on@gmail.com>
> wrote:
> > Hi Mario,
> >
> > Thanks for the reply.
> >
> > Please help me understand "maybe it is the case that you do not have enough
> > undo space on Oracle".
> >
> > Are you talking about disk space or configured memory? Please help me verify
> > the same on Oracle.
> >
> > Any help is highly appreciated !!!
> >
> >
> >
> > Regards,
> > Sanjiv Singh
> >
> >
> > Regards
> > Sanjiv Singh
> > Mob :  +091 9990-447-339
> >
> > On Fri, Sep 25, 2015 at 11:57 AM, Mario Amatucci <mamatucci@gmail.com>
> > wrote:
> >>
> >> Hi Sanjiv,
> >> maybe it is the case that you do not have enough undo space on Oracle; I saw
> >> that error in my case when loading data. Can you try with just 1 (smallest)
> >> partition?
> >> Kind Regards,
> >> Mario Amatucci
> >>
> >>
> >> On 25 September 2015 at 06:23, @Sanjiv Singh <sanjiv.is.on@gmail.com>
> >> wrote:
> >> > Hi David,
> >> >
> >> > Please find attached the log file with "--verbose" added to the sqoop command.
> >> >
> >> >
> >> > Sqoop version: 1.4.5
> >> > hadoop-2.6.0
> >> >
> >> > Let me know if need other details.
> >> >
> >> >
> >> >
> >> > Regards
> >> > Sanjiv Singh
> >> > Mob :  +091 9990-447-339
> >> >
> >> > On Fri, Sep 25, 2015 at 6:04 AM, David Robson
> >> > <David.Robson@software.dell.com> wrote:
> >> >>
> >> >> Hi Sanjiv,
> >> >>
> >> >>
> >> >>
> >> >> Could you please run the failing command again and add "--verbose" to
> >> >> generate debug logging and post the full log file?
> >> >>
> >> >>
> >> >>
> >> >> David
> >> >>
> >> >>
> >> >>
> >> >> From: @Sanjiv Singh [mailto:sanjiv.is.on@gmail.com]
> >> >> Sent: Thursday, 24 September 2015 10:10 PM
> >> >> To: user@sqoop.apache.org
> >> >> Cc: Sanjiv Singh
> >> >> Subject: OraOop : Sqoop Direct Oracle import failed with error
> "Error:
> >> >> java.io.IOException: SQLException in nextKeyValue"
> >> >>
> >> >>
> >> >>
> >> >> Hi Folks,
> >> >>
> >> >>
> >> >>
> >> >> I am trying to import a partitioned Oracle table into Hive through
> >> >> "OraOop" direct mode and I am getting an error.
> >> >>
> >> >> I tried other permutations and combinations of sqoop parameters; here
> >> >> is what I have tried.
> >> >>
> >> >> Worked (chunk.method=PARTITION and only 1 mapper):
> >> >>
> >> >>
> >> >>
> >> >>
> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71'
> >> >> \
> >> >> -Doraoop.chunk.method=PARTITION  \
> >> >> --m 1  \
> >> >> --direct \
> >> >>
> >> >> Worked (chunk.method=PARTITION  removed and 100 mappers):
> >> >>
> >> >>
> >> >>
> >> >>
> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71'
> >> >> \
> >> >> --m 100  \
> >> >> --direct \
> >> >>
> >> >> Doesn't work (chunk.method=PARTITION and  100 mappers):
> >> >>
> >> >>
> >> >>
> >> >>
> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71'
> >> >> \
> >> >> -Doraoop.chunk.method=PARTITION  \
> >> >> --m 100  \
> >> >> --direct \
> >> >>
> >> >> Although the other combinations are working, can you please help me
> >> >> understand why chunk.method=PARTITION with multiple mappers is failing?
> >> >>
> >> >> Is there something that needs to be done on Hive for partitions?
> >> >>
> >> >> Please help me in resolving the issue.
> >> >>
> >> >> Any help is highly appreciated.
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> See below the full sqoop command that is failing, and the error logs.
> >> >>
> >> >> Sqoop Import command (which is failing):
> >> >>
> >> >> $SQOOP_HOME/bin/sqoop import  \
> >> >> -Doraoop.disabled=false \
> >> >>
> >> >>
> >> >>
> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71'
> >> >> \
> >> >> -Doraoop.chunk.method=PARTITION  \
> >> >> -Doraoop.import.consistent.read=true \
> >> >>
> -Dmapred.child.java.opts="-Djava.security.egd=file:/dev/../dev/urandom"
> >> >> \
> >> >> --connect jdbc:oracle:thin:@host:port/db \
> >> >> --username ***** \
> >> >> --password ***** \
> >> >> --table DATE_DATA \
> >> >> --direct \
> >> >> --hive-import \
> >> >> --hive-table tempDB.DATE_DATA \
> >> >> --split-by D_DATE_SK \
> >> >> --m 100  \
> >> >> --delete-target-dir \
> >> >> --target-dir /tmp/34/DATE_DATA
> >> >>
> >> >>
> >> >> Error logs :
> >> >>
> >> >> 2015-09-24 16:23:57,068 [myid:] - INFO  [main:Job@1452] - Task Id :
> >> >> attempt_1442839036383_0051_m_000006_0, Status : FAILED
> >> >> Error: java.io.IOException: SQLException in nextKeyValue
> >> >>     at
> >> >>
> >> >>
> org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
> >> >>     at
> >> >>
> >> >>
> org.apache.sqoop.manager.oracle.OraOopDBRecordReader.nextKeyValue(OraOopDBRecordReader.java:351)
> >> >>     at
> >> >>
> >> >>
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
> >> >>     at
> >> >>
> >> >>
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> >> >>     at
> >> >>
> >> >>
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> >> >>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >> >>     at
> >> >>
> >> >>
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> >> >>     at
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> >> >>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> >> >>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> >> >>     at java.security.AccessController.doPrivileged(Native Method)
> >> >>     at javax.security.auth.Subject.doAs(Subject.java:415)
> >> >>     at
> >> >>
> >> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> >> >>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> >> >> Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command
> not
> >> >> properly ended
> >> >>
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:91)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
> >> >>     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
> >> >>     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
> >> >>     at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3431)
> >> >>     at
> >> >>
> >> >>
> oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
> >> >>     at
> >> >>
> >> >>
> org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
> >> >>     at
> >> >>
> >> >>
> org.apache.sqoop.manager.oracle.OraOopDBRecordReader.executeQuery(OraOopDBRecordReader.java:417)
> >> >>     at
> >> >>
> >> >>
> org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
> >> >>     ... 13 more
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> Regards
> >> >> Sanjiv Singh
> >> >> Mob :  +091 9990-447-339
> >> >
> >> >
> >
> >
>
