hadoop-common-issues mailing list archives

From "Lei (Eddy) Xu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-11697) Use larger value for fs.s3a.connection.timeout.
Date Mon, 09 Mar 2015 23:08:38 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lei (Eddy) Xu updated HADOOP-11697:
-----------------------------------
    Attachment: HADOOP-11697.001.patch

Reverted the patch so that it does not change the time unit of the socket connection timeout in the AWS SDK.



> Use larger value for fs.s3a.connection.timeout.
> -----------------------------------------------
>
>                 Key: HADOOP-11697
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11697
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.6.0
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>            Priority: Minor
>              Labels: s3
>         Attachments: HADOOP-11697.001.patch, HDFS-7908.000.patch
>
>
> The default value of {{fs.s3a.connection.timeout}} is {{50000}} milliseconds. It causes
> many {{SocketTimeoutException}} errors when uploading large files using {{hadoop fs -put}}.
> Also, the units for {{fs.s3a.connection.timeout}} and {{fs.s3a.connection.establish.timeout}}
> are milliseconds. For S3 connections, I think it is not necessary to have sub-second timeout
> values. Thus I suggest changing the time unit to seconds, to ease the sysadmin's job.
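As an illustration, a user hitting these timeouts today can raise the value (still in milliseconds) in core-site.xml. The property name below is the existing one discussed in this issue; the value shown is only an example for the sketch, not a proposed default:

    <property>
      <name>fs.s3a.connection.timeout</name>
      <!-- value is interpreted in milliseconds; 200000 is an illustrative choice only -->
      <value>200000</value>
    </property>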



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
