mina-dev mailing list archives

From "Goldstein Lyor (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SSHD-754) OOM in sending data for channel
Date Thu, 07 Sep 2017 05:47:00 GMT

     [ https://issues.apache.org/jira/browse/SSHD-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Goldstein Lyor updated SSHD-754:
    Labels: channel stream throttle  (was: )

> OOM in sending data for channel
> -------------------------------
>                 Key: SSHD-754
>                 URL: https://issues.apache.org/jira/browse/SSHD-754
>             Project: MINA SSHD
>          Issue Type: Bug
>    Affects Versions: 1.6.0
>            Reporter: Eugene Petrenko
>            Assignee: Goldstein Lyor
>              Labels: channel, stream, throttle
> I have an implementation of an SSHD server based on the library. It sends gigabytes (e.g. 5 GB)
of data as command output.
> Starting with PuTTY plink 0.68 (and also plink 0.69), we began seeing OOM errors.
Examining memory dumps showed that most of the memory was consumed via the function
> org.apache.sshd.common.session.AbstractSession#writePacket(org.apache.sshd.common.util.buffer.Buffer)
> In the hprof I see thousands of PendingWriteFuture objects (by the way, each holds a reference
to a logger instance), and those objects are created only by this function.
> It is clear that the session is going through a rekey; I can see the kexState field indicating the progress.

> Is there a way to artificially limit the sending queue, even if the remote window
allows sending that enormous amount of data? By my estimation, the window was reported
to be around 1.5 GB or more. Perhaps such a huge window size was caused by the arithmetic overflow
that was fixed in SSHD-701.
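A minimal sketch of the kind of application-level throttle being asked about. The names and structure here are illustrative, not actual sshd API: the idea is simply to cap the number of in-flight asynchronous writes with a semaphore, so that a stalled flush (e.g. while a rekey is in progress) back-pressures the producer instead of letting pending writes accumulate without bound.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedWrites {
    public static void main(String[] args) throws Exception {
        final int maxPending = 8;                    // cap on outstanding writes
        Semaphore inFlight = new Semaphore(maxPending);
        AtomicInteger pending = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService flusher = Executors.newSingleThreadExecutor();

        for (int i = 0; i < 100; i++) {
            inFlight.acquire();                      // producer blocks at the cap
            int now = pending.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);   // track worst-case queue depth
            flusher.submit(() -> {                   // simulated asynchronous flush
                pending.decrementAndGet();
                inFlight.release();                  // completion frees a slot
            });
        }
        flusher.shutdown();
        flusher.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("peak pending = " + peak.get());
    }
}
```

With real sshd code, the release would go into a listener registered on the IoWriteFuture returned by writePacket(), so the permit is returned only once the packet has actually been flushed to the wire.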

This message was sent by Atlassian JIRA
