mina-dev mailing list archives

From "Goldstein Lyor (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SSHD-754) OOM in sending data for channel
Date Sun, 09 Jul 2017 15:25:02 GMT

    [ https://issues.apache.org/jira/browse/SSHD-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16079638#comment-16079638 ]

Goldstein Lyor commented on SSHD-754:

If you indeed have a fix / improvement in mind, a pull request is the best way to have it
evaluated and eventually merged...

> OOM in sending data for channel
> -------------------------------
>                 Key: SSHD-754
>                 URL: https://issues.apache.org/jira/browse/SSHD-754
>             Project: MINA SSHD
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>            Reporter: Eugene Petrenko
> I have an implementation of an SSHD server built with this library. It sends gigabytes
> (e.g. 5 GB) of data as command output.
> Starting with PuTTY plink 0.68 (and also plink 0.69), we started getting OOM errors.
> Checking memory dumps showed that most of the memory is consumed from the function
> org.apache.sshd.common.session.AbstractSession#writePacket(org.apache.sshd.common.util.buffer.Buffer)
> In the hprof I see thousands of PendingWriteFuture objects (incidentally, each holds a
> reference to a logger instance), and those objects are created only from this function.
> It is clear the session is going through a rekey; I can see the kexState indicating the progress.

> Is there a way to artificially limit the sending queue, even if the remote window would
> allow sending that enormous amount of data? By my estimation, the window was reported to
> be around 1.5 GB or more. Perhaps such a huge window size was caused by the arithmetic
> overflow that was fixed in SSHD-701.

This message was sent by Atlassian JIRA
