qpid-proton mailing list archives

From: Alan Conway <acon...@redhat.com>
Subject: Re: messenger credit concern
Date: Thu, 28 Feb 2013 20:55:27 GMT
On 02/28/2013 03:11 PM, Rafael Schloming wrote:
> On Tue, Feb 26, 2013 at 12:18 PM, Ken Giusti <kgiusti@redhat.com> wrote:
[snip]
> As I mentioned above, I don't think recv should be thought of as a flow
> control thing. It does provide input into what messenger does for flow
> control, but it's really just about a way for the app to fetch messages,
> and so far I've been considering three scenarios:
>
>    (1) an app wants to receive and process messages indefinitely, in which
> case pn_messenger_recv(-1) now does that job pretty nicely
>    (2) an app wants to make a simple request/response, in which case it
> wants to receive exactly one message back and getting any more would be a
> bug.
>    (3) a generalized form of option (2) where an app makes N requests and
> processes each one as it arrives back. In this case you have to do
> pn_messenger_recv(N - what you've already processed).
>
> I think these are probably the 3 most common scenarios, and I can see how
> using a pattern like (3) to cater to scenario (1) would be awkward; however,
> I think it's less awkward when used in scenario (3).
>
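For concreteness, here is roughly what those three patterns look like against
the C messenger API - just a sketch, with error handling omitted and the exact
blocking behaviour of pn_messenger_recv() assumed rather than guaranteed:

    #include <proton/message.h>
    #include <proton/messenger.h>

    /* (1) receive and process indefinitely; recv(-1) lets messenger
       decide how much credit to hand out */
    void receive_forever(pn_messenger_t *m, pn_message_t *msg)
    {
      for (;;) {
        pn_messenger_recv(m, -1);
        while (pn_messenger_incoming(m)) {
          pn_messenger_get(m, msg);
          /* ... process msg ... */
        }
      }
    }

    /* (2) simple request/response: expect exactly one reply */
    void receive_one(pn_messenger_t *m, pn_message_t *msg)
    {
      pn_messenger_recv(m, 1);
      pn_messenger_get(m, msg);
      /* ... process the single reply ... */
    }

    /* (3) N outstanding requests: only ask for what is still expected */
    void receive_n(pn_messenger_t *m, pn_message_t *msg, int n)
    {
      int processed = 0;
      while (processed < n) {
        pn_messenger_recv(m, n - processed);
        while (pn_messenger_incoming(m)) {
          pn_messenger_get(m, msg);
          /* ... process msg ... */
          processed++;
        }
      }
    }
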
> That said I'm open to trying to simplify the API here, but I fundamentally
> don't think of this as a wire-level flow control API. I get the impression
> from the comments in this thread that there is an idea that the app
> developer somehow has more knowledge or is in a better position to
> distribute credit than the messenger implementation, whereas I think the
> opposite is true. In my experience, the nature/shape of incoming message
> traffic is not a variable that is well known at development time. Perhaps
> you can define the extreme bounds of what loads you want to be able to
> handle, but at runtime there are many unpredictable factors:
>
>    - your service can go from idle to extreme bursts with no warning
>    - round trip times can fluctuate based on general network activity
>    - message processing times might vary due to other activity on
>      your machine or degradation in services you depend on
>    - your load might be unevenly distributed across different
> links/connections
>    - a buggy or malicious app might be (unintentionally) DoSing you
>
> All of these factors and more go into determining the optimal and/or fair
> credit allocation at any given point in time, and that means a robust flow
> control algorithm really needs to be dynamic in nature. Not only that, but
> a robust flow control algorithm is a huge part of the value that messaging
> infrastructure provides, and should really be a fundamentally separate
> concern from how apps logically process messages.
>
>
>> Since Messenger models a queue of incoming messages, I'd rather see flow
>> control configured as thresholds on that queue, and recv() not take any
>> arguments at all.
>>
>> Something like this:
>>
>>   Messenger m;
>>   ...
>>   m.set_flow_stop( 10000 )
>>   m.set_flow_resume( 9000 )
>>   ...
>>   for (;;) {
>>      m.recv()
>>      while (m.incoming())
>>      ....
>>
>> IMHO, this is a lot "cleaner" than the current approach.  Of course, some
>> may find my sample names too cryptic :)
>>
>
> I think this limits the API to only the first scenario I described above. At
> least it's not clear to me how you'd fetch exactly N messages.
>
>
>>
>>  From an implementation point of view, the "flow stop" threshold is really
>> just a suggestion for how much credit should be distributed across the
>> links.  We could distribute more, as we would need to if the number of
>> links is greater than the flow stop threshold.  Or less, assuming a point of
>> diminishing returns.
>>
>> Once the flow stop threshold is hit, credit would be drained from all
>> links.  No further credit would be granted until the number of "queued"
>> messages drops below "flow resume".
>>
>> This is the same model we use for queue flow control in the C++ broker.
>>
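Just to make sure I'm reading the proposal right, the flow stop/resume
behaviour would be roughly the following (pseudo-C; flow_stop and flow_resume
are Ken's proposed names, not an existing API):

    /* sketch of the proposed queue-threshold flow control: stop granting
       credit once the incoming queue reaches flow_stop, resume once it
       drops below flow_resume */
    typedef struct {
      int flow_stop;    /* e.g. 10000 */
      int flow_resume;  /* e.g. 9000 */
      int stopped;      /* currently withholding credit? */
    } flow_window_t;

    int grant_more_credit(flow_window_t *w, int queued)
    {
      if (queued >= w->flow_stop) {
        w->stopped = 1;               /* drain credit from all links */
      } else if (w->stopped && queued < w->flow_resume) {
        w->stopped = 0;               /* start granting credit again */
      }
      return !w->stopped;
    }
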
>
>   This is starting to mix two things: (1) how the application fetches
> messages from the messenger, and (2) how to tune the messenger's internal
> flow control algorithm in the specific case that the application wants to
> receive messages indefinitely. I think (2) is premature given that we
> haven't really done any performance work yet. Ideally I'd say we don't want
> to have to tune it, rather just give it some bounds to work within, e.g.
> limit to no more than X megabytes or no more than Y messages.
>
> In any case I think we need to be clear on the application scenarios we're
> trying to support. I've given 3 common ones above. Are there cases that
> you think are missing, and do you have a better way to cater to the 3 I've
> mentioned?
>

I think this is a common case:

(1a) an app wants to receive and process messages indefinitely, but wants the 
implementation to use a bounded buffer of N messages or B bytes to do so. AKA 
"credit window" in AMQP 0.10 or "prefetch" in JMS.

I'm not familiar enough with Messenger yet to say whether that belongs in the 
messenger API or in some other configuration, but I think it needs to be a use 
case that is easy to set up. Agreed that ideally we would have a dynamic flow 
control algorithm that can figure out the optimal credit settings by itself, but 
until we do I suspect the simple "bounded buffer" model will cover most cases, 
and doesn't require exposing the complexity of the underlying flow control.
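
For what it's worth, the message-count half of (1a) can already be approximated
on top of the current API by only ever asking recv() for enough to top the
buffer up to the window - a sketch, assuming the argument to
pn_messenger_recv() bounds how many messages messenger will buffer, and noting
that a byte limit (the "B bytes" part) isn't expressible this way:

    #include <proton/message.h>
    #include <proton/messenger.h>

    #define WINDOW 1000   /* the bounded buffer / "prefetch" size */

    void receive_bounded(const char *address)
    {
      pn_messenger_t *m = pn_messenger(NULL);
      pn_message_t *msg = pn_message();
      int running = 1;          /* cleared elsewhere on shutdown */

      pn_messenger_start(m);
      pn_messenger_subscribe(m, address);

      while (running) {
        /* only request enough to fill the buffer back up to WINDOW */
        pn_messenger_recv(m, WINDOW - pn_messenger_incoming(m));
        while (pn_messenger_incoming(m)) {
          pn_messenger_get(m, msg);
          /* ... process msg ... */
        }
      }

      pn_messenger_stop(m);
      pn_message_free(msg);
      pn_messenger_free(m);
    }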


