sqoop-dev mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SQOOP-3136) Sqoop should work well with not default file systems
Date Fri, 24 Feb 2017 07:17:44 GMT

    [ https://issues.apache.org/jira/browse/SQOOP-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882117#comment-15882117 ]

Hudson commented on SQOOP-3136:
-------------------------------

FAILURE: Integrated in Jenkins build Sqoop-hadoop200 #1096 (See [https://builds.apache.org/job/Sqoop-hadoop200/1096/])
SQOOP-3136: Add support to Sqoop being able to handle different file (maugli: [https://git-wip-us.apache.org/repos/asf?p=sqoop.git&a=commit&h=9466a0c7d9a585a94a472fc672cad87c30c8125b])
* (edit) src/java/org/apache/sqoop/mapreduce/DataDrivenImportJob.java
* (edit) src/java/org/apache/sqoop/mapreduce/JdbcUpdateExportJob.java
* (edit) src/java/org/apache/sqoop/mapreduce/MergeJob.java
* (add) src/java/org/apache/sqoop/util/FileSystemUtil.java
* (edit) src/java/org/apache/sqoop/hive/HiveImport.java
* (edit) src/java/org/apache/sqoop/mapreduce/CombineFileInputFormat.java
* (edit) src/java/org/apache/sqoop/io/SplittingOutputStream.java
* (edit) src/java/org/apache/sqoop/mapreduce/JdbcExportJob.java
* (edit) src/java/org/apache/sqoop/tool/ImportTool.java
* (edit) src/java/org/apache/sqoop/util/FileUploader.java
* (edit) src/java/org/apache/sqoop/hive/TableDefWriter.java
* (edit) src/java/org/apache/sqoop/mapreduce/HBaseBulkImportJob.java
* (edit) src/java/org/apache/sqoop/manager/oracle/OraOopUtilities.java
* (edit) src/java/org/apache/sqoop/lib/LargeObjectLoader.java
* (edit) src/java/org/apache/sqoop/io/LobReaderCache.java
* (edit) src/java/com/cloudera/sqoop/io/LobReaderCache.java
* (edit) src/java/org/apache/sqoop/mapreduce/ExportJobBase.java
* (add) src/test/org/apache/sqoop/util/TestFileSystemUtil.java


> Sqoop should work well with not default file systems
> ----------------------------------------------------
>
>                 Key: SQOOP-3136
>                 URL: https://issues.apache.org/jira/browse/SQOOP-3136
>             Project: Sqoop
>          Issue Type: Improvement
>          Components: connectors/hdfs
>    Affects Versions: 1.4.5
>            Reporter: Illya Yalovyy
>            Assignee: Illya Yalovyy
>         Attachments: SQOOP-3136.patch
>
>
> Currently Sqoop assumes the default file system for I/O operations, which makes it hard to use other FileSystem implementations as a source or destination. Here is an example:
> {code}
> sqoop import --connect <JDBC CONNECTION> --table table1 --driver <JDBC DRIVER> --username root --password **** --delete-target-dir --target-dir s3a://some-bucket/tmp/sqoop
> ...
> 17/02/15 19:16:59 ERROR tool.ImportTool: Imported Failed: Wrong FS: s3a://some-bucket/tmp/sqoop, expected: hdfs://<DNS>:8020
> {code}
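The error above occurs because a path with an explicit scheme (here `s3a://`) is checked against the cluster's default file system (`hdfs://`). The committed change adds a `FileSystemUtil` helper to qualify each path against its own file system. As a rough illustration of that idea only (not the actual Sqoop code, and using plain `java.net.URI` instead of Hadoop's `Path`/`FileSystem` classes; all names here are hypothetical), a scheme-aware qualification step might look like:

```java
import java.net.URI;

public class QualifyDemo {
    // Hypothetical helper: if the path already names a file system via its
    // scheme (s3a://, hdfs://, ...), keep it as-is; only scheme-less paths
    // are resolved against the configured default file system.
    static URI makeQualified(String path, URI defaultFs) {
        URI u = URI.create(path);
        return (u.getScheme() != null) ? u : defaultFs.resolve(u);
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://namenode:8020/");
        // An explicit s3a:// URI is not forced onto the default HDFS:
        System.out.println(makeQualified("s3a://some-bucket/tmp/sqoop", defaultFs));
        // prints s3a://some-bucket/tmp/sqoop
        // A bare path still lands on the default file system:
        System.out.println(makeQualified("/tmp/sqoop", defaultFs));
        // prints hdfs://namenode:8020/tmp/sqoop
    }
}
```

In Hadoop itself the equivalent move is calling `path.getFileSystem(conf)` rather than `FileSystem.get(conf)`, so each path is handled by the file system its URI actually names.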



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
