qpid-proton mailing list archives

From: Rafael Schloming <...@alum.mit.edu>
Subject: Re: Observations on the performance of the proton event model
Date: Fri, 12 Dec 2014 15:21:04 GMT
On Thu, Dec 11, 2014 at 6:34 PM, Ken Giusti <kgiusti@redhat.com> wrote:
>
>
>
> ----- Original Message -----
> > From: "Rafael Schloming" <rhs@alum.mit.edu>
> > To: proton@qpid.apache.org
> > Sent: Wednesday, December 10, 2014 5:41:20 PM
> > Subject: Re: Observations on the performance of the proton event model
> >
> > On Wed, Dec 10, 2014 at 11:23 AM, Ken Giusti <kgiusti@redhat.com> wrote:
> >
> > > Hi,
> > >
> > > I've been working on a simple patch to qpidd that ports the AMQP 1.0
> > > module to the new event interface provided by proton 0.8.  See
> > > https://issues.apache.org/jira/browse/QPID-6255 for the patch.
> > >
> > > With the above patch, I've noticed a pretty consistent small drop in
> > > overall qpidd performance as gauged by qpid-cpp-benchmark (see comments
> > > in above jira).  Turns out the event loop is doing a lot more work when
> > > compared to the polled approach for the same message load.
> > >
> > > Digging around a bit, there are a couple of issues that result in qpidd
> > > doing a lot of unnecessary work:
> > >
> > > 1) the PN_TRANSPORT event isn't needed by this driver - pending output is
> > > manually checked at a later point.  In my test, approximately 25% of the
> > > total events are PN_TRANSPORT events, which the driver simply discards
> > >
> > > 2) A more serious issue - I get a PN_LINK_FLOW event for _every_ message
> > > transfer!  Turns out that PN_LINK_FLOW is being issued for two different
> > > events (IMHO): when a flow frame is received (yay) and each time a
> > > transfer is done and credit is consumed (ugh).
> > >
> > > Item #2 seems like a bug - these two events have different semantic
> > > meaning and would likely result in different processing paths in the
> > > driver (in the case of qpidd, the credit consumed case would be ignored).
> > >
> >
> > It's not a bug, it was added intentionally since there are circumstances
> > where you get stalls if you don't have it. Each time a message is sent you
> > are actually updating the credit values on the link, and so if you don't
> > generate the event at that point, you need some fairly complex/subtle code
> > in your handlers to compensate. (You may actually have a bug yourself now
> > depending on how you've integrated the engine.)
> >
>
> Yes, I can see your point - processing a series of "credit added"/"credit
> removed" events doesn't really make sense.  In the end you're really just
> concerned with the credit level at the point where you're servicing the
> link.
>
>
> > The event shouldn't be generated for every message though, just once for
> > each batch of messages. In other words if I stuff a bunch of messages into
> > the engine at the same time, there should be only one flow event produced
> > once the engine has finished writing messages to the wire. If you're
> > actually observing one flow per message then either there is a bug in the
> > logic that elides duplicate events, or you're stuffing one message into the
> > engine at a time.
> >
>
> Hmmm... AFAICT, it would seem that the existing broker code iterates over
> each consuming link, and issues one message per link.  Total conjecture
> here, but I thought that was desired, esp. in the case of multiple
> consumers on a shared queue.  Gordon could possibly shed some light on
> this.  In any case, if an application is load balancing messages across
> several outgoing links, won't each send result in a FLOW event?
>
> IOW - certain legitimate messaging patterns will result in that 1 Flow for
> every send.  Or am I missing something?
>
>
I think you're right: in that particular case the automatic duplicate
detection that the collector does wouldn't eliminate the extraneous flow
events. I believe we could fix this, though, by adjusting the logic that
produces those flow events, i.e. first write all outstanding messages onto
the wire and then produce flow events only for those links that were
touched in that process. That strategy should be robust to the usage
pattern you describe.
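
Either way, the handler side shouldn't need to care how many PN_LINK_FLOW
events end up being collected. A minimal sketch of the kind of event loop I
mean, against the 0.8 C engine API (process_outgoing() is just a made-up
application hook), would be something like:

/* Minimal sketch against the 0.8 C engine API.  process_outgoing() is a
 * made-up application hook; everything else is the stock proton API. */
#include <proton/engine.h>
#include <proton/event.h>

void process_outgoing(pn_link_t *link);   /* hypothetical application hook */

void drive_events(pn_collector_t *collector)
{
    pn_event_t *event;
    while ((event = pn_collector_peek(collector))) {
        switch (pn_event_type(event)) {
        case PN_TRANSPORT:
            /* pending output is checked elsewhere in this driver, so the
             * event itself carries no extra information -- just discard it */
            break;
        case PN_LINK_FLOW: {
            /* only the current credit level matters when servicing the link,
             * no matter how many flow events were collected to get here */
            pn_link_t *link = pn_event_link(event);
            if (pn_link_is_sender(link) && pn_link_credit(link) > 0)
                process_outgoing(link);
            break;
        }
        default:
            /* ... all other events handled as before ... */
            break;
        }
        pn_collector_pop(collector);
    }
}

The handler keys off pn_link_credit() at the moment it services the link, so
coalesced or duplicate flow events are harmless, and a driver that checks
pending output elsewhere can simply discard PN_TRANSPORT.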
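
And for what it's worth, by "stuffing a bunch of messages into the engine at
the same time" I mean something like the following on the sending side (again
just a sketch; the payload is assumed to be an already-encoded message and
the tag scheme is a placeholder):

/* Sender-side sketch, also against the 0.8 C API: queue a whole batch of
 * transfers before pumping the transport/collector, so that the resulting
 * credit updates can be coalesced into a single PN_LINK_FLOW. */
#include <proton/engine.h>
#include <stdio.h>

void send_batch(pn_link_t *sender, const char *payload, size_t len, int batch)
{
    static unsigned long counter = 0;
    for (int i = 0; i < batch && pn_link_credit(sender) > 0; i++) {
        char tag[32];
        int n = snprintf(tag, sizeof(tag), "delivery-%lu", counter++);
        pn_delivery(sender, pn_dtag(tag, n));   /* start a new outgoing delivery */
        pn_link_send(sender, payload, len);     /* write the encoded message */
        pn_link_advance(sender);                /* complete this transfer */
    }
    /* only now process the transport and pop collector events,
     * rather than doing so once per message */
}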

--Rafael
