ode-dev mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From "Matthieu Riou" <matthieu.r...@gmail.com>
Subject Re: Ode Performance: Round I
Date Wed, 06 Jun 2007 22:13:55 GMT
Actually for in-memory processes, it would save us all reads and writes (we
should never read or write it in that case). And for persistent processes,
then it will save a lot of reads (which are still expensive because of
deserialization).

On 6/6/07, Matthieu Riou <matthieu.riou@gmail.com> wrote:
>
> Two things:
>
> 1. We should also consider caching the Jacob state. Instead of always
> serializing / writing and reading / deserializing, caching those states
> could save us a lot of reads.
>
> 2. Cutting down the transaction count is a significant refactoring so I
> would start a new branch for that (maybe ODE 2.0?). And we're going to
> need a lot of tests to chase regressions :)
>
> I think 1 could go without a branch. It's not trivial but I don't think it
> would take more than a couple of weeks (I would have to get deeper into the
> code to give a better evaluation).
>
> On 6/6/07, Alex Boisvert <boisvert@intalio.com> wrote:
> >
> > Howza,
> >
> > I started testing a short-lived process implementing a single
> > request-response operation.  The process structure is as follows:
> >
> > -Receive Purchase Order
> > -Do some assignments (schema mappings)
> > -Invoke CRM system to record the new PO
> > -Do more assignments (schema mappings)
> > -Invoke ERP system to record a new work order
> > -Send back an acknowledgment
> >
> > Some deployment notes:
> > -All WS operations are SOAP/HTTP
> > -The process is deployed as "in-memory"
> > -The CRM and ERP systems are mocked as Axis2 services (as dumb as can be
> > to
> > avoid bottlenecks)
> >
> > After fixing a few minor issues (to handle the load), and fixing a few
> > obvious code inefficiencies which gave us roughly a 20% gain, we are now
> > near-100% CPU utilization.  (I'm testing on my dual-core system)   As it
> > stands, Ode clocks about 70 transactions per second.
> >
> > Is this good?  I'd say there's room for improvement.  Based on previous
> > work
> > in the field, I estimate we could get up to 300-400 transactions/second.
> >
> > How do we improve this?  Well, looking at the end-to-end execution of
> > the
> > process, I counted 4 thread-switches and 4 JTA transactions.  Those are
> > not
> > really necessary, if you ask me.  I think significant improvements could
> > be
> > made if we could run this process straight-through, meaning in a single
> > thread and a single transaction.  (Not to mention it would make things
> > easier to monitor and measure ;)
> >
> > Also, to give you an idea, the top 3 areas where we spend most of our
> > CPU
> > today are:
> >
> > 1) Serialization/deserialization of the Jacob state (I'm evaluating
> > about
> > 40-50%)
> > 2) XML marshaling/unmarshaling (About 10-20%)
> > 3) XML processing:  XPath evaluation + assignments (About 10-20%)
> >
> > (The rest would be about 20%; I need to load up JProbe or DTrace to
> > provide
> > more accurate measurements.  My current estimates are a mix of
> > non-scientific statistical sampling of thread dumps and a quick run with
> > the
> > JVM's built-in profiler)
> >
> > So my general question is...  how do we get started on the single thread
> > +
> > single transaction refactoring?    Anybody already gave some thoughts to
> >
> > this?  Are there any pending design issues before we start?  How do we
> > work
> > on this without disrupting other parts of the system?  Do we start a new
> > branch?
> >
> > alex
> >
>
>

Mime
  • Unnamed multipart/alternative (inline, None, 0 bytes)
View raw message