hadoop-common-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable
Date Mon, 01 Jul 2013 22:23:20 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697266#comment-13697266 ]

Suresh Srinivas commented on HADOOP-9676:

+1 for the patch.
> make maximum RPC buffer size configurable
> -----------------------------------------
>                 Key: HADOOP-9676
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9676
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.1.0-beta
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch
> Currently the RPC server just allocates however much memory the client asks for, without
> validating.  It would be nice to make the maximum RPC buffer size configurable.  This would
> prevent a rogue client from bringing down the NameNode (or other Hadoop daemon) with a few
> requests for 2 GB buffers.  It would also make it easier to debug issues with super-large
> RPCs or malformed headers, since OOMs can be difficult for developers to reproduce.
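The core idea in the description is to validate the client-declared length before allocating, rather than trusting it. A minimal sketch of that check is below; the constant name and method are hypothetical illustrations (the thread does not name the eventual configuration key or patch internals), with the cap standing in for the configurable maximum:

```java
// Sketch of length validation before buffer allocation, as described above.
// MAX_DATA_LENGTH is a placeholder for the configurable maximum RPC buffer size.
public class RpcLengthCheck {
    static final int MAX_DATA_LENGTH = 64 * 1024 * 1024; // hypothetical 64 MB cap

    // Reject negative or oversized client-declared lengths instead of
    // allocating blindly and risking an OOM in the daemon.
    static byte[] allocateChecked(int declaredLength) {
        if (declaredLength < 0 || declaredLength > MAX_DATA_LENGTH) {
            throw new IllegalArgumentException(
                "Requested data length " + declaredLength
                + " exceeds maximum allowed " + MAX_DATA_LENGTH);
        }
        return new byte[declaredLength];
    }

    public static void main(String[] args) {
        // A normal-sized request is allocated as usual.
        System.out.println(allocateChecked(1024).length);
        // A rogue ~2 GB request is rejected before any allocation happens.
        try {
            allocateChecked(Integer.MAX_VALUE);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

The point of the check is that the failure becomes an explicit, loggable exception on the offending connection rather than an OOM that takes down the whole daemon.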

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
