qpid-proton mailing list archives

From Rafael Schloming <...@alum.mit.edu>
Subject Re: Idle Timeout of a Connection
Date Wed, 01 Apr 2015 11:24:35 GMT
On Wed, Apr 1, 2015 at 6:00 AM, Dominic Evans <dominic.evans@uk.ibm.com>
wrote:

> 2.4.5 Idle Timeout Of A Connection
>
> "To avoid spurious timeouts, the value in idle-time-out SHOULD be half the
> peer's actual timeout threshold"
>
> So, to me, this means that on the @open performative the client should flow
> (e.g.) 30000 as the idleTimeOut it would like to negotiate, but should
> actually only enforce that data is received from the other end within 60000
> milliseconds before it closes the session+connection.
>
> However, if that is the case, then the code in proton-c (pn_tick_amqp in
> transport.c) and proton-j (#tick() in TransportImpl.java) would appear to
> be doing the wrong thing?
> Currently it *halves* the advertised remote_idle_timeout of the peer in
> order to determine what deadline to adhere to for sending empty keepalive
> frames to the remote end.
> Similarly, it uses its local_idle_timeout as-is to determine whether the remote
> end hasn't sent data recently enough (closing the link with
> resource-limit-exceeded when the deadline elapses). This would seem to mean
> that empty frames are being sent twice as often as they need to be, and
> resource-limit-exceeded is being fired too soon.
>
> It seems to me that instead it should use remote_idle_timeout as-is for
> determining the deadline for sending data, and the local_idle_timeout
> specified by the client user should either be doubled when determining the
> deadline or halved before sending it in the @open frame.
>
> Thoughts?
>

I believe your interpretation is correct. I've certainly noticed idle
frames being sent significantly more often than I would have expected, but
I haven't had time to dig into the cause.

--Rafael
