hadoop-common-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HADOOP-4802) RPC Server send buffer retains size of largest response ever sent
Date Mon, 04 Jan 2010 07:38:54 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HADOOP-4802.

       Resolution: Duplicate
    Fix Version/s: 0.22.0

This is a duplicate of HADOOP-6460, which has been fixed. Marking this as resolved as well.

> RPC Server send buffer retains size of largest response ever sent 
> ------------------------------------------------------------------
>                 Key: HADOOP-4802
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4802
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 0.18.2, 0.19.0
>            Reporter: stack
>             Fix For: 0.20.2, 0.21.0, 0.22.0
> The stack-based ByteArrayOutputStream in Server.Handler is reset each time
> through the run loop. This sets the BAOS 'size' back to zero, but the
> allocated backing buffer is unaltered. If, during a Handler's lifecycle, any
> particular RPC response was fat -- megabytes, even -- the buffer expands
> during the write to accommodate that response but never shrinks afterward.
> If a hosting Server has had more than one 'fat payload' occurrence, the
> resultant occupied heap can provoke memory woes (see
> https://issues.apache.org/jira/browse/HBASE-900?focusedCommentId=12654009#action_12654009
> for an extreme example; occasional payloads of 20-50MB with 30 handlers robbed the heap of
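The retention behavior described above can be demonstrated with a minimal, self-contained sketch (not Hadoop's actual Server code). ByteArrayOutputStream's backing array is a protected field, so a small subclass can expose its capacity; the MAX_RETAINED threshold and the replace-the-stream fix pattern below are illustrative assumptions, not the exact HADOOP-6460 patch.

```java
import java.io.ByteArrayOutputStream;

public class BufferRetentionDemo {

    // Expose the backing buffer's length; 'buf' is protected in ByteArrayOutputStream.
    static class InspectableBaos extends ByteArrayOutputStream {
        int capacity() {
            return buf.length;
        }
    }

    // Hypothetical threshold for illustration only.
    static final int MAX_RETAINED = 1024 * 1024; // 1 MB

    public static void main(String[] args) {
        InspectableBaos baos = new InspectableBaos();

        // Simulate one "fat" 20 MB RPC response.
        byte[] fatResponse = new byte[20 * 1024 * 1024];
        baos.write(fatResponse, 0, fatResponse.length);

        // reset() zeroes the logical size but keeps the enlarged backing array.
        baos.reset();
        System.out.println("size after reset:     " + baos.size());
        System.out.println("capacity after reset: " + baos.capacity());

        // Fix pattern: discard the stream once its retained buffer is too large,
        // so one fat payload cannot pin megabytes of heap for the Handler's lifetime.
        if (baos.capacity() > MAX_RETAINED) {
            baos = new InspectableBaos();
        }
        System.out.println("capacity after replace: " + baos.capacity());
    }
}
```

Run with a Handler-per-thread server in mind: without the replacement step, every handler that ever served a fat response keeps its expanded buffer until the thread dies.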

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
