synapse-dev mailing list archives

From: Andreas Veithen <>
Subject: Re: HTTP Core Performance and Reactor Buffer Size
Date: Sun, 24 Nov 2013 12:02:23 GMT

While debugging this scenario (on Ubuntu with the default receive
buffer size of 8192 and a payload of 1M), I noticed something else.
Very early in the test execution, there are TCP retransmissions from
the client to Synapse. This is of course weird and should not happen.
While trying to understand why that occurs, I noticed that the TCP
window size advertised by Synapse to the client is initially 43690,
and then drops gradually to 8192. The latter value is expected because
it corresponds to the receive buffer size. The question is why the TCP
window is initially 43690.

It turns out that this is because httpcore-nio sets the receive buffer
size only on the sockets for new incoming connections (in
AbstractMultiworkerIOReactor#prepareSocket), but not on the server
socket itself [1]. Since the initial TCP window size is advertised in
the SYN/ACK packet before the connection is accepted (and httpcore-nio
gets a chance to set the receive buffer size), it will be the default
receive buffer size, not 8192.
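The effect can be reproduced with plain NIO, independent of httpcore (class name and the use of an ephemeral port are mine):

```java
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

public class ListenerRcvBuf {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        // Setting SO_RCVBUF on the *listening* channel before bind() is what
        // matters: the kernel derives the window advertised in the SYN/ACK
        // from the listener's receive buffer, and sockets accepted from this
        // listener inherit the value.
        server.setOption(StandardSocketOptions.SO_RCVBUF, 8192);
        server.bind(new InetSocketAddress(0)); // ephemeral port for the demo
        // Linux may round the value up (and internally doubles it), so only
        // check that the request took effect at least at the asked-for size.
        System.out.println(server.getOption(StandardSocketOptions.SO_RCVBUF) >= 8192);
        server.close();
    }
}
```

Setting the option only on accepted sockets, as httpcore-nio currently does, is too late to influence the SYN/ACK.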

To fix this, I modified httpcore-nio as follows:

Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/
--- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/
(revision 1544958)
+++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/
(working copy)
@@ -233,6 +233,9 @@
             try {
                 final ServerSocket socket = serverChannel.socket();
+                if (this.config.getRcvBufSize() > 0) {
+                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
+                }
             } catch (final IOException ex) {

This fixes the TCP window and retransmission problem, and it also
appears to fix half of the overall issue: transmitting the 1M request
payload now takes only a few hundred milliseconds instead of 20
seconds. However, the issue still exists on the return path.
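For what it's worth, an 8K window combined with retransmission stalls is enough to account for a 20+ second transfer. A back-of-the-envelope sketch (the 0.2 s stall per window is my assumption, based on Linux's 200 ms minimum retransmission timeout, not a measurement from this thread):

```java
import java.util.Locale;

public class WindowBound {
    public static void main(String[] args) {
        // TCP can keep at most one receive window in flight per round trip,
        // so throughput <= window / RTT. With retransmissions, the effective
        // "RTT" per window degenerates toward the retransmission timeout.
        int window = 8192;        // Synapse default RcvBufSize, bytes
        double rtt = 0.2;         // seconds, ASSUMED stall per window
        double payload = 1 << 20; // the 1M test payload
        double seconds = payload / (window / rtt);
        System.out.printf(Locale.ROOT, "%.1f%n", seconds); // prints 25.6
    }
}
```

which comes out close to the 25.9 s Hiranya measured for the 8192 case.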



On Thu, Nov 21, 2013 at 9:08 PM, Hiranya Jayathilaka
<> wrote:
> Hi Devs,
> I just found out that the performance of the Synapse Pass Through transport
> is highly sensitive to the RcvBufferSize of the IO reactors, especially when
> mediating very large messages. Here are some test results. In this case,
> I'm simply passing a 1M message through Synapse to a backend server,
> which echoes it back to the client. Notice how the execution time of
> the scenario varies with the RcvBufferSize of the IO reactors.
> RcvBufferSize (bytes)          Scenario Execution Time (seconds)
> ========================================================
> 8192 (Synapse default)         25.9
> 16384                           0.4
> 32768                           0.2
> Is this behavior normal? If so, does it make sense to change the Synapse
> default buffer size to something larger (e.g. 16k)?
> Interestingly, I see this difference in behavior on Linux only; I cannot
> see a significant change in behavior on Mac.
> Appreciate your thoughts on this.
> Thanks,
> Hiranya
> --
> Hiranya Jayathilaka
> Mayhem Lab/RACE Lab;
> Dept. of Computer Science, UCSB;
> E-mail:;  Mobile: +1 (805) 895-7443
> Blog:
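For reference, the RcvBufferSize Hiranya is tuning is exposed in httpcore-nio 4.3 through IOReactorConfig. A sketch of raising the default (values illustrative; assumes httpcore-nio 4.3 on the classpath):

```java
import org.apache.http.impl.nio.reactor.DefaultListeningIOReactor;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.apache.http.nio.reactor.IOReactorException;

public class ReactorSetup {
    public static void main(String[] args) throws IOReactorException {
        // 16K matches the candidate default floated in this thread.
        IOReactorConfig config = IOReactorConfig.custom()
                .setRcvBufSize(16 * 1024)
                .setSndBufSize(16 * 1024)
                .build();
        DefaultListeningIOReactor reactor = new DefaultListeningIOReactor(config);
        System.out.println(config.getRcvBufSize());
    }
}
```

Note that until the server-socket fix above lands, this config only affects accepted connections, not the window advertised in the SYN/ACK.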
