qpid-dev mailing list archives

From "Rupert Smith" <rupertlssm...@googlemail.com>
Subject Re: Weekly plans.
Date Wed, 07 Nov 2007 17:45:25 GMT
What size of messages are you maxing the 1Gig connection at? Obviously it's
easy to do with big messages. I'll attempt a guess, assuming the test is
pubsub 1:x, with x large enough that the broadcast traffic is what is
consuming the bandwidth.

1Gbit/sec / 8 bits/byte / 176k msgs/sec = approx 710 bytes/msg
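
Spelled out as a trivial snippet, in case anyone wants to rerun the
arithmetic with their own link speed and rate (the figures below are just
the ones quoted in this thread):

public class LinkMath
{
    public static void main(String[] args)
    {
        double linkBitsPerSec = 1000000000.0; // 1 Gbit/s link, as quoted
        double msgsPerSec = 176000.0;         // quoted publish rate
        // bits/sec -> bytes/sec, then divide by the message rate
        double bytesPerMsg = linkBitsPerSec / 8.0 / msgsPerSec;
        System.out.printf("approx %.0f bytes/msg%n", bytesPerMsg); // ~710
    }
}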

Are you running p2p or pubsub tests, and if pubsub what is the fanout ratio
(1:x)?

The fastest I've seen the Java M2 go on pubsub 2:16 is around 100k msgs/sec
with 256 byte messages. I feel it could go faster, though: I was testing
with just one client machine, and the CPU maxed out on the client, not the
broker, well before the connection was saturated :(

I have been doing a bit of comparison of M2 against other middleware
products. Generally speaking, to compare products I use small messages
(settling on 256 bytes as a standard for all tests), because large messages
hit IO bounds and test the hardware rather than the middleware. So far, we
hold up pretty well.

I think one of the best direct comparisons between two brokers is a
transient 1:1 p2p test, scaled up 8 or 16 times, so it's 8:8 or 16:16 across
that many separate queues. This gives the broker a good opportunity to scale
over many cores, but also exercises the full service time to route each and
every message (in contrast with pubsub, where each message might be routed
once, then pumped out onto the network multiple times). Ultimately it is
this service time that matters. Doing p2p with small messages uses more
CPU/message on the broker side, so it gives you the best feel for the
efficiency of the software and the best chance of avoiding saturating the
hardware. Pubsub produces bigger, and therefore more impressive, numbers,
but I do think p2p is better for comparison (unless you want to test the
efficiency of topics/selectors, which is also worth comparing).
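
For anyone who wants to knock up such a test by hand rather than use the
perftests, here is a rough sketch of its shape in plain JMS. This is only a
sketch: the factory lookup and queue names are placeholders, not the actual
perftest code.

import javax.jms.*;
import java.util.concurrent.atomic.AtomicLong;

public class ScaledP2PSketch
{
    static final int PAIRS = 16;     // 16:16 across 16 separate queues
    static final int MSG_SIZE = 256; // small msgs keep the broker CPU-bound
    static final AtomicLong received = new AtomicLong();

    public static void main(String[] args) throws Exception
    {
        ConnectionFactory factory = lookupFactory(); // placeholder
        Connection connection = factory.createConnection();
        connection.start();

        for (int i = 0; i < PAIRS; i++)
        {
            // One consumer session per pair; JMS sessions are single threaded.
            Session consumeSession =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            final Queue queue = consumeSession.createQueue("perf.queue." + i);
            consumeSession.createConsumer(queue).setMessageListener(
                new MessageListener()
                {
                    public void onMessage(Message m) { received.incrementAndGet(); }
                });

            // One producer session and thread per pair.
            final Session produceSession =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            new Thread(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        MessageProducer producer = produceSession.createProducer(queue);
                        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // transient
                        BytesMessage msg = produceSession.createBytesMessage();
                        msg.writeBytes(new byte[MSG_SIZE]);
                        while (true) { producer.send(msg); }
                    }
                    catch (JMSException e) { e.printStackTrace(); }
                }
            }).start();
        }

        // Sample the consume rate once a second.
        long last = 0;
        while (true)
        {
            Thread.sleep(1000);
            long now = received.get();
            System.out.println((now - last) + " msgs/sec");
            last = now;
        }
    }

    static ConnectionFactory lookupFactory()
    {
        throw new UnsupportedOperationException("wire up your JMS provider here");
    }
}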

Likewise, in persistent mode, for p2p with small messages the limiting
factor is disk latency, so the test uncovers how good the disk store/fetch
algorithm is with respect to the disk's max IO operations per second. Again,
this shows up the differences between the algorithms used by different
middleware quite nicely. The best I have seen so far was SwiftMQ, which
managed to batch writes to reach 8k msgs/sec in auto-ack mode, 16:16 p2p, on
a disk setup that can handle maybe 500 IOPS (very rough estimate), which is
impressive.
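
To put rough numbers on why that implies batching: 8000 msgs/sec against a
disk doing ~500 writes/sec works out at

    8000 msgs/sec / 500 writes/sec ≈ 16 msgs per disk write

so the store must be grouping a decent number of messages into each
physical write, rather than syncing once per message.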

To do a direct comparison, I suggest you use the same hardware setup for all
tests. Build the perftests on M2, under java/perftests:

mvn install uk.co.thebadgerset:junit-toolkit-maven-plugin:tkscriptgen assembly:directory
(or you could use assembly:assembly to create a .tar.gz)

cd target/qpid-perftests-1.0-incubating-M2-all-test-deps.dir

run the test cases:

./TQBT-TX-Qpid-01
.... through to
./PTBT-AA-Qpid-01

detailed in the pom.xml. TQBT-TX stands for Transient Queue Benchmark
Throughput with Transactions, PTBT-AA for Persistent Topic Benchmark
Throughput with AutoAck, and so on. An example run might look like:

./TQBT-TX-Qpid-01 broker=tcp://10.0.0.1:5672 -o resultsdir/ --csv

Also, the perftest stuff is most up to date on M2.1, both the test code and
the numbers in the generated scripts in the pom.xml (which have taken a lot
of tweaking to get right). The M2.1 perftests have been updated to use pure
JMS, like Arnaud did for trunk, but I have also put a few fixes into them
that have not been merged onto trunk. I think I should probably merge all
these changes from M2.1 onto M2 and trunk, to make direct comparison easier.

Rupert

On 07/11/2007, Carl Trieloff <cctrieloff@redhat.com> wrote:
>
> Robert Greig wrote:
> > On 07/11/2007, Arnaud Simon <asimon@redhat.com> wrote:
> >
> >
> >> This week I will be adding dtx and crash recovery tests, I will also be
> >> looking at optimizing the java 0_10 client.
> >>
> >
> > Do you have any performance test results for the 0-10 client?
> >
> > RG
> >
>
> As all the clients talk to the C++ broker, there are two questions: what
> is the broker capable of, and how close does each language's client get
> to that? I still don't have enough data to quote for each component.
>
> It looks like the C++ broker/client can max TCP on Gig for publish (176k
> msg/sec) for the size of message my test is using, and it consumes 1 core
> of CPU time to do this. Consume does not show a symmetric rate -- still
> working out whether it's the broker or the client lib.
>
> I also don't think this is the max - i.e. IB should be much faster - the
> number above is limited by the specific network I am running on. One of
> the upcoming tests will most likely be to 'cat' the full conversation to
> the socket / IO buffer on the local machine, to determine the top limit
> if the machine had multiple NICs or ran on IB, and to find out where the
> consume issue is.... (I think Alan is hatching a plan to try that)
>
> What are the rate / message size / CPU you are seeing on M2? I would like
> to do a direct comparison.
> Carl.
