sqoop-dev mailing list archives

From: GitBox <...@apache.org>
Subject: [GitHub] [sqoop] SUMANG484 removed a comment on issue #25: Use mysql to import data from hdfs to the database when sqoop.mysql.e…
Date: Mon, 18 Mar 2019 09:55:16 GMT
SUMANG484 removed a comment on issue #25: Use mysql to import data from hdfs to the database when sqoop.mysql.e…
URL: https://github.com/apache/sqoop/pull/25#issuecomment-473772409
 
 
   I have been facing this error:
   ./bin/sqoop-import --connect jdbc:mysql://127.0.0.1/demodb --username demo -P --table CUSTOMERS --bindir /tmp/sqoop-hduser_/compile/ --m 1
   Warning: /usr/lib/sqoop/../hbase does not exist! HBase imports will fail.
   Please set $HBASE_HOME to the root of your HBase installation.
   Warning: /usr/lib/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
   Please set $HCAT_HOME to the root of your HCatalog installation.
   Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
   Please set $ACCUMULO_HOME to the root of your Accumulo installation.
   Warning: /usr/lib/sqoop/../zookeeper does not exist! Accumulo imports will fail.
   Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
   19/03/18 01:20:22 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
   Enter password: 
   19/03/18 01:20:24 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
   19/03/18 01:20:24 INFO tool.CodeGenTool: Beginning code generation
   Mon Mar 18 01:20:24 EDT 2019 WARN: Establishing SSL connection without server's identity
verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements
SSL connection must be established by default if explicit option isn't set. For compliance
with existing applications not using SSL the verifyServerCertificate property is set to 'false'.
You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and
provide truststore for server certificate verification.
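
   (Aside: this SSL warning is harmless here, since the queries below still run. Assuming SSL
   is genuinely not needed against this local MySQL instance, it can be silenced by disabling
   SSL explicitly in the JDBC URL, quoted so the shell does not interpret the '?':

       ./bin/sqoop-import --connect 'jdbc:mysql://127.0.0.1/demodb?useSSL=false' ...

   or SSL can be kept on with useSSL=true plus a configured truststore, as the message itself
   suggests.)
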
   19/03/18 01:20:24 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `CUSTOMERS`
AS t LIMIT 1
   19/03/18 01:20:24 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `CUSTOMERS`
AS t LIMIT 1
   19/03/18 01:20:24 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
   Note: /tmp/sqoop-hduser_/compile/CUSTOMERS.java uses or overrides a deprecated API.
   Note: Recompile with -Xlint:deprecation for details.
   19/03/18 01:20:26 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hduser_/compile/CUSTOMERS.jar
   19/03/18 01:20:26 WARN manager.MySQLManager: It looks like you are importing from mysql.
   19/03/18 01:20:26 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
   19/03/18 01:20:26 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
   19/03/18 01:20:26 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull
(mysql)
   19/03/18 01:20:26 INFO mapreduce.ImportJobBase: Beginning import of CUSTOMERS
   19/03/18 01:20:26 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead,
use mapreduce.jobtracker.address
   19/03/18 01:20:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for
your platform... using builtin-java classes where applicable
   19/03/18 01:20:27 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use
mapreduce.job.jar
   19/03/18 01:20:27 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead,
use mapreduce.jobtracker.address
   19/03/18 01:20:27 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead,
use mapreduce.job.maps
   19/03/18 01:20:27 INFO Configuration.deprecation: session.id is deprecated. Instead, use
dfs.metrics.session-id
   19/03/18 01:20:27 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker,
sessionId=
   Mon Mar 18 01:20:28 EDT 2019 WARN: Establishing SSL connection without server's identity
verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements
SSL connection must be established by default if explicit option isn't set. For compliance
with existing applications not using SSL the verifyServerCertificate property is set to 'false'.
You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and
provide truststore for server certificate verification.
   19/03/18 01:20:28 INFO db.DBInputFormat: Using read commited transaction isolation
   19/03/18 01:20:28 INFO mapreduce.JobSubmitter: number of splits:1
   19/03/18 01:20:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local285413982_0001
   19/03/18 01:20:29 INFO mapred.LocalDistributedCacheManager: Creating symlink: /app/hadoop/tmp/mapred/local/1552886428895/libjars
<- /usr/lib/sqoop/libjars/*
   19/03/18 01:20:29 WARN fs.FileUtil: Command 'ln -s /app/hadoop/tmp/mapred/local/1552886428895/libjars
/usr/lib/sqoop/libjars/*' failed 1 with: ln: failed to create symbolic link '/usr/lib/sqoop/libjars/*':
No such file or directory
   
   19/03/18 01:20:29 WARN mapred.LocalDistributedCacheManager: Failed to create symlink: /app/hadoop/tmp/mapred/local/1552886428895/libjars
<- /usr/lib/sqoop/libjars/*
   19/03/18 01:20:29 INFO mapred.LocalDistributedCacheManager: Localized file:/app/hadoop/tmp/mapred/staging/root285413982/.staging/job_local285413982_0001/libjars
as file:/app/hadoop/tmp/mapred/local/1552886428895/libjars
   19/03/18 01:20:29 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
   19/03/18 01:20:29 INFO mapreduce.Job: Running job: job_local285413982_0001
   19/03/18 01:20:29 INFO mapred.LocalJobRunner: OutputCommitter set in config null
   19/03/18 01:20:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version
is 1
   19/03/18 01:20:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary
folders under output directory:false, ignore cleanup failures: false
   19/03/18 01:20:29 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
   19/03/18 01:20:29 WARN mapred.LocalJobRunner: job_local285413982_0001
   org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE,
inode="/":hduser_:supergroup:drwxr-xr-x
   	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
   	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
   	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1753)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1737)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1696)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
   	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2990)
   	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1096)
   	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
   	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
   	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
   	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
   	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
   	at java.security.AccessController.doPrivileged(Native Method)
   	at javax.security.auth.Subject.doAs(Subject.java:422)
   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
   	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
   
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
   	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
   	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
   	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2474)
   	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2447)
   	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1248)
   	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1245)
   	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1245)
   	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1237)
   	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2216)
   	at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:343)
   	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:540)
   Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=root, access=WRITE, inode="/":hduser_:supergroup:drwxr-xr-x
   	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
   	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
   	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1753)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1737)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1696)
   	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
   	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2990)
   	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1096)
   	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
   	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
   	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
   	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
   	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
   	at java.security.AccessController.doPrivileged(Native Method)
   	at javax.security.auth.Subject.doAs(Subject.java:422)
   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
   	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
   
   	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1507)
   	at org.apache.hadoop.ipc.Client.call(Client.java:1453)
   	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
   	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
   	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
   	at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
   	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:583)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
   	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
   	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
   	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
   	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
   	at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
   	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2472)
   	... 9 more
   19/03/18 01:20:30 INFO mapreduce.Job: Job job_local285413982_0001 running in uber mode : false
   19/03/18 01:20:30 INFO mapreduce.Job:  map 0% reduce 0%
   19/03/18 01:20:30 INFO mapreduce.Job: Job job_local285413982_0001 failed with state FAILED
due to: NA
   19/03/18 01:20:30 INFO mapreduce.Job: Counters: 0
   19/03/18 01:20:30 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use
org.apache.hadoop.mapreduce.FileSystemCounter instead
   19/03/18 01:20:30 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 2.5552 seconds (0
bytes/sec)
   19/03/18 01:20:30 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter
is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
   19/03/18 01:20:30 INFO mapreduce.ImportJobBase: Retrieved 0 records.
   19/03/18 01:20:30 ERROR tool.ImportTool: Error during import: Import job failed!
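
   The root cause is the AccessControlException above: the job runs as user=root, but the
   HDFS root directory is owned by hduser_ with mode drwxr-xr-x, so root cannot create the
   default target directory (/user/root/CUSTOMERS). A minimal sketch of two possible fixes,
   assuming hduser_ is the HDFS superuser (it owns the root inode in the exception):

       # Option 1: give root a writable home directory in HDFS, so the
       # default target path resolves somewhere root is allowed to write.
       sudo -u hduser_ hdfs dfs -mkdir -p /user/root
       sudo -u hduser_ hdfs dfs -chown root:supergroup /user/root

       # Option 2: run the import as the HDFS owner instead of root.
       sudo -u hduser_ ./bin/sqoop-import --connect jdbc:mysql://127.0.0.1/demodb \
           --username demo -P --table CUSTOMERS \
           --bindir /tmp/sqoop-hduser_/compile/ --m 1

   (The earlier libjars symlink warning is not the problem: the log line right after it
   shows the jars were localized successfully anyway.)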
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
