thrift-user mailing list archives

From Jules Cisek <>
Subject Re: non-blocking servers are leaking sockets
Date Wed, 22 Jan 2014 20:09:53 GMT
this service actually needs to respond in under 100ms (and usually does in
less than 20) so a short delay is just not possible.

on the server, i see a lot of this in the logs:

14/01/22 19:15:27 WARN Thread-3 server.TThreadedSelectorServer: Got an
IOException in internalRead! Connection reset by peer
        at Method)

(note that these resets happen when the async client doesn't get a response
from the server in the time set using client.setTimeout(m) which in our
case can be quite often and we're ok with that)

i'm not sure why the thrift library feels it's necessary to log this stuff,
since clients drop connections all the time and should be expected to.
frankly, it makes me think that somehow this common error is not being
properly handled (although looking through the code it does look like
eventually the SocketChannel gets close()'ed)
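for what it's worth, the pattern a selector loop needs here is just "catch,
close, move on". this is a stdlib-only sketch (no thrift classes; the class
name and port setup are mine) that reproduces a reset with SO_LINGER=0 and
shows the handling — a reset from a timed-out client is routine as long as
the channel always gets released:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ResetDemo {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel listener = ServerSocketChannel.open();
        listener.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(listener.getLocalAddress());
        SocketChannel server = listener.accept();

        // SO_LINGER=0 turns close() into an abortive close (RST) --
        // roughly what the server sees when an async client gives up and drops
        client.setOption(StandardSocketOptions.SO_LINGER, 0);
        client.close();
        Thread.sleep(200); // give the RST time to arrive

        ByteBuffer buf = ByteBuffer.allocate(64);
        try {
            while (server.read(buf) >= 0) {
                buf.clear();
            }
            // read() returned -1: orderly EOF from the peer
        } catch (IOException expected) {
            // "Connection reset by peer" lands here -- routine, not fatal
        } finally {
            // the part that matters: the channel is always released,
            // so no CLOSE_WAIT entry or leaked fd is left behind
            server.close();
        }
        System.out.println("server channel closed: " + !server.isOpen());
    }
}
```

whether the library should log it at WARN is a taste question, but as long as
the finally-style cleanup runs, the reset itself is harmless.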


On Mon, Jan 20, 2014 at 12:15 PM, Sammons, Mark <>wrote:

> Hi, Jules.
> I'm not sure my problems are completely analogous to yours, but I had a
> situation where a client program making many short calls to a remote
> thrift server was getting a "no route to host" exception after some number
> of calls, and it appeared to be due to slow release of closed sockets.  I
> found
> that adding a short (20ms) delay between calls resolved the problem.
> I realize this is not exactly a solution, but it has at least allowed me to
> keep working...
> Regards,
> Mark
> ________________________________________
> From: Jules Cisek []
> Sent: Monday, January 20, 2014 12:39 PM
> To:
> Subject: non-blocking servers are leaking sockets
> i'm running java TThreadedSelectorServer and THsHaServer based servers and
> both seem to be leaking sockets (thrift 0.9.0)
> googling around for answers i keep running into
> which puts the blame on
> the TCP config on the server while acknowledging that perhaps a problem in
> the application layer does exist (see last entry)
> i prefer not to mess with the TCP config on the machine because it is used
> for various tasks, also i did not have these issues with a
> TThreadPoolServer and a TSocket (blocking + TBufferedTransport) or any
> non-thrift server on the same machine.
> what happens is i get a bunch of TCP connections in a CLOSE_WAIT state and
> these remain in that state indefinitely.  but what is even more concerning,
> i get many sockets that don't show up in netstat at all and only lsof can
> show me that they exist.  on Linux lsof shows them as "can't identify
> protocol".  according to
> these
> sockets are in a "half closed state" and the linux kernel has no idea what
> to do with them.
> i'm pretty sure there's a problem with misbehaving clients, but the server
> should not leak resources because of a client side bug.
> my only recourse is to run a cronjob that looks at the lsof output and
> restarts the server whenever the socket count gets dangerously close to
> "too many open files" (8192 in my case)
> any ideas?
> --
> jules cisek |
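the CLOSE_WAIT buildup described above is easy to reproduce with nothing but
the JDK. in this sketch (class name mine, loopback sockets as a stand-in for
real clients), the peer closes, the application sees EOF, but the kernel
parks the connection in CLOSE_WAIT until the application itself calls
close() — which is exactly the state a server leaks into if it drops a
channel reference without closing it:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket(0); // ephemeral port
        Socket client = new Socket("127.0.0.1", listener.getLocalPort());
        Socket server = listener.accept();

        client.close(); // peer goes away (sends FIN)

        // EOF is delivered to the application...
        int r = server.getInputStream().read();
        System.out.println("read returned " + r); // -1 = peer closed

        // ...but the kernel keeps the connection in CLOSE_WAIT until the
        // application calls close() on its side -- skip that, and the
        // socket sits in CLOSE_WAIT indefinitely, as netstat shows
        System.out.println("server socket closed: " + server.isClosed());

        server.close(); // releasing the fd is what ends CLOSE_WAIT
        listener.close();
    }
}
```

so the CLOSE_WAIT entries (and the lsof-only "can't identify protocol" fds)
point at server code paths that stop reading a channel without ever closing
it, rather than at the machine's TCP config.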

jules cisek |
