qpid-dev mailing list archives

From Rafael Schloming <...@alum.mit.edu>
Subject Re: dispatch router handles 100,000 addresses
Date Thu, 01 May 2014 16:03:20 GMT
On Thu, May 1, 2014 at 10:34 AM, Gordon Sim <gsim@redhat.com> wrote:

> On 05/01/2014 03:09 PM, Rafael Schloming wrote:
>> On Thu, May 1, 2014 at 9:36 AM, Michael Goulish <mgoulish@redhat.com>
>> wrote:
>>> Since I reported earlier that 1 messenger-based sender grew to
>>> 3.4 GB after sending to 30,000 unique addrs, it seems reasonable
>>> that 1000 messenger-based receivers, attempting to receive from a total
>>> of 1,000,000 addrs, would have attempted to grow to a total of more
>>> than 100 GB.  Which would account very nicely for the behavior I saw.
>>> ( The box had 45 GB mem. )
> It would be worth actually confirming the growth of memory as you start
> your receivers. The memory usage on the sender side isn't necessarily the
> same as on the receiver side (depends of course what the memory is being
> used for).
>> The receive side is a bit different, and using qpid-messaging will not
>> necessarily help you scale up. Fundamentally in order to receive messages
>> from N different addresses you need to create N subscriptions. That's
>> going
>> to be just as expensive regardless of which API you use to do it.
> On the client side, an extra subscription shouldn't require a great deal
> of extra memory though, at least I wouldn't expect it to.

Yeah, it would definitely be worth understanding where exactly the memory
usage is coming from. I would actually expect messaging, messenger, and
dispatch router to all have similar scaling characteristics when it comes
to multiple links on a connection. They all use the engine underneath, and
there is a certain minimum amount of per-link state required by the
protocol, so the expected memory utilization would just be that minimum
plus whatever per-link information is kept on top of it.

It occurs to me that one possible source of memory consumption might be the
delivery objects held on each link. Currently every link has its own pool
of delivery objects. This won't be particularly efficient if you've got a
whole lot of links, many of which are often idle. It should be pretty
straightforward to use a single connection-scoped pool for the delivery
objects instead. This would probably improve things for dispatch, messenger,
and messaging, and I don't think there would be any drawbacks. I don't know
if this could account for as much memory usage as Mick is reporting, though,
and I still don't see why there would be any difference in usage between
dispatch, messaging, and messenger.

