hadoop-common-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-12861) RPC client fails too quickly when server connection limit is reached
Date Tue, 01 Mar 2016 19:57:18 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15174321#comment-15174321 ]

Daryn Sharp commented on HADOOP-12861:
--------------------------------------

Sure would be nice to have HADOOP-10940 integrated so I don't have to make a different patch...

> RPC client fails too quickly when server connection limit is reached
> --------------------------------------------------------------------
>
>                 Key: HADOOP-12861
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12861
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.7.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>
> The NN's RPC server immediately closes new client connections when a connection limit
> is reached. The client rapidly retries a small number of times with no delay, which causes
> clients to fail quickly. If the connection is refused or timed out, the connection retry policy
> retries with backoff. Clients should treat a reset connection as a connection failure so that
> the connection retry policy is used.
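
A minimal sketch of the idea described above, not the actual patch for this issue: a server at its connection limit closes the accepted socket right away, which surfaces on the client as a reset rather than "connection refused", so the sketch treats both the same way and backs off before retrying. All names and constants (BackoffConnectSketch, MAX_RETRIES, BASE_SLEEP_MS) are illustrative assumptions, not Hadoop code.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketException;
    import java.util.concurrent.TimeUnit;

    public class BackoffConnectSketch {
        // Illustrative values only.
        private static final int MAX_RETRIES = 10;
        private static final long BASE_SLEEP_MS = 1000;

        public static Socket connectWithBackoff(InetSocketAddress addr)
                throws IOException, InterruptedException {
            IOException lastFailure = null;
            for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
                try {
                    Socket s = new Socket();
                    s.connect(addr, 20_000); // 20s connect timeout
                    return s;
                } catch (SocketException e) {
                    // A reset from a server at its connection limit: treat it
                    // like any other connection failure and fall through to backoff.
                    lastFailure = e;
                } catch (IOException e) {
                    lastFailure = e; // refused, timed out, etc.
                }
                // Exponential backoff with a cap so a busy server is not hammered.
                long sleepMs = Math.min(BASE_SLEEP_MS << attempt,
                        TimeUnit.SECONDS.toMillis(30));
                Thread.sleep(sleepMs);
            }
            throw lastFailure;
        }
    }

The point is only that the reset path joins the same backoff loop as refused/timed-out connections instead of exhausting a handful of immediate retries.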



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
