mina-dev mailing list archives

From "Adam Fisk" <adamf...@gmail.com>
Subject Re: grizzly versus mina
Date Thu, 24 May 2007 22:10:41 GMT
Oh I see.  That is certainly odd.  Maybe the previous post about Tomcat IO
being faster than Java IO is a clue?

On 5/24/07, John Preston <byhisdeeds@gmail.com> wrote:
>
> My thought was that when comparing Glassfish, which is built on top of
> Grizzly, against Tomcat with its own NIO engine, you only get a 10%
> improvement. But when you compare AsyncWeb on top of MINA versus Grizzly,
> you get a 50% difference. That would tell me that MINA is way slower
> than the IO engine for Tomcat. But I haven't seen this.
>
> John
>
> On 5/24/07, Adam Fisk <adamfisk@gmail.com> wrote:
> > The benchmark was swapping MINA and Grizzly, both using AsyncWeb...  I
> > think you're maybe thinking of Grizzly as synonymous with Glassfish?
> > They pulled it out into a generic NIO framework along the lines of MINA.
> >
> > On 5/24/07, John Preston <byhisdeeds@gmail.com> wrote:
> > >
> > > OK. I was looking at the Tomcat vs. Grizzly benchmark. But then it's a
> > > bit strange if you're only 10% faster than Tomcat but 50% faster than
> > > MINA. That 50% is with AsyncWeb on MINA, so it's not a benchmark of
> > > MINA alone but of the application on MINA.
> > >
> > > I chose MINA for a simple, fast, scalable server that would serve up
> > > data files via HTTP requests, and MINA for me at the time (about a year
> > > ago) was the quickest and simplest to use. I remember trying Tomcat,
> > > but it was too big and wasn't that fast for simple responses, so I'm
> > > not sure whether that 50% is MINA or AsyncWeb.
> > >
> > > I also agree java.net has some very useful projects, and for me, I
> > > appreciate being able to read other implementation details and see
> > > whether they have any use for me. Also, let's remember that SUN, like
> > > everybody else, has the right to beat their chest and say they are the
> > > best. It's for us to ignore them when we see that it's more ego than
> > > anything substantial.
> > >
> > > Anyway, back to the matter of benchmarks: it might be nice to have a
> > > set of classes that would allow one to create a test of various
> > > operations using MINA, so that from version to version, patches
> > > included, we could keep track of whether we are improving things.
> > >
> > > John
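The kind of version-to-version harness John describes might be sketched as
follows. This is only an illustration under assumptions: the class and method
names are invented here, the workload is a dummy stand-in, and a real harness
would drive a MINA-based server (e.g. an echo round-trip) rather than the
placeholder operation.

```java
// Hypothetical sketch of a regression benchmark harness: time a fixed
// workload, report ops/sec, and compare against a recorded baseline so a
// slowdown between releases is caught.  All names here are invented for
// illustration; this is not MINA's API.
public final class RegressionBench {

    /** Runs the workload {@code iterations} times and returns ops/sec. */
    public static double measure(Runnable workload, int iterations) {
        // Warm up so the JIT compiles the hot path before we time it.
        for (int i = 0; i < 1_000; i++) {
            workload.run();
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            workload.run();
        }
        long elapsedNanos = System.nanoTime() - start;
        return iterations / (elapsedNanos / 1_000_000_000.0);
    }

    /**
     * True if {@code current} is no more than {@code tolerance}
     * (e.g. 0.1 for 10%) slower than {@code baseline}.
     */
    public static boolean withinBaseline(double baseline, double current,
                                         double tolerance) {
        return current >= baseline * (1.0 - tolerance);
    }

    public static void main(String[] args) {
        // Stand-in workload; a real run would exercise a MINA server.
        double opsPerSec = measure(() -> Math.sqrt(42.0), 100_000);
        System.out.println("ops/sec: " + opsPerSec);
        System.out.println("within baseline: "
                + withinBaseline(opsPerSec, opsPerSec, 0.10));
    }
}
```

Recording each release's numbers and asserting `withinBaseline` in the build
would give exactly the patch-to-patch tracking suggested above.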
> > > On 5/24/07, Adam Fisk <adamfisk@gmail.com> wrote:
> > > > I hear you.  Sun's generally just annoying.  It would probably be
> > > > worth taking a look under the hood to see if there's any real magic
> > > > there regardless of all the politics.  Wish I could volunteer to do
> > > > it, but I've got a startup to run!
> > > >
> > > > Thanks.
> > > >
> > > > -Adam
> > > >
> > > >
> > > > On 5/24/07, Alex Karasulu <akarasulu@apache.org> wrote:
> > > > >
> > > > > Oh yes I agree with you completely.  I was really referring to how
> > > > > benchmarks are being used as marketing tools and published to
> > > > > discredit other projects.  Also I believe that there are jewels at
> > > > > java.net as well.  And you read me right: I'm no fan of SUN nor its
> > > > > "open source" efforts.
> > > > >
> > > > > <OT>
> > > > > Back in the day when Bill Joy and Scott McNealy were at the helm,
> > > > > I had a profound sense of respect for SUN.  I actually wanted to
> > > > > become an engineer there.  Now, IMO, they're a completely different
> > > > > beast, driven by marketing rather than engineering principles.  I
> > > > > feel they resort to base practices that show a different character
> > > > > than the noble SUN I was used to.  It's sad to know that the SUN
> > > > > many of us respected and looked up to has long since died.
> > > > > </OT>
> > > > >
> > > > > Regarding benchmarks, they are great for internal metrics and for
> > > > > shedding light on differences in architecture that could produce
> > > > > more efficient software.  I'm a big fan of competing against our
> > > > > own releases - meaning benchmarking a baseline and looking at the
> > > > > performance progression of the software as it evolves with time.
> > > > > Also, testing other frameworks is good for showing how different
> > > > > scenarios are handled better with different architectures: I agree
> > > > > that we can learn a lot from these tests.
> > > > >
> > > > > I just don't want to use metrics to put down other projects.  It's
> > > > > all about how you use the metrics, which I think was my intent in
> > > > > the last post.  This perhaps is why I am a bit disgusted with these
> > > > > tactics, which are not in line with open source etiquette but
> > > > > rather are the mark of commercially driven, marketing-oriented OSS
> > > > > efforts.
> > > > >
> > > > > Alex
> > > > >
> > > > > On 5/24/07, Adam Fisk <adamfisk@gmail.com> wrote:
> > > > > >
> > > > > > I agree on the tendency to manipulate benchmarks, but that
> > > > > > doesn't mean benchmarks aren't a useful tool.  How else can we
> > > > > > evaluate performance?  I guess I'm most curious about what the
> > > > > > two projects might be able to learn from each other.  I would
> > > > > > suspect MINA's APIs are significantly easier to use than
> > > > > > Grizzly's, for example, and it wouldn't surprise me at all if
> > > > > > Sun's benchmarks were somewhat accurate.  I hate Sun's java.net
> > > > > > projects as much as the next guy, but that doesn't mean there's
> > > > > > not an occasional jewel in there.
> > > > > >
> > > > > > It would at least be worth running independent tests.  If the
> > > > > > differences are even close to the claims, it would make a ton of
> > > > > > sense to just copy their ideas.  No need for too much pride on
> > > > > > either side!  Just seems like they've put a ton of work into
> > > > > > rigorously analyzing the performance tradeoffs of different
> > > > > > design decisions, and it might make sense to take advantage of
> > > > > > that.  If their benchmarks are off and MINA performs better, then
> > > > > > they should go ahead and copy MINA.
> > > > > >
> > > > > > That's all assuming the performance tweaks don't make the
> > > > > > existing APIs unworkable.
> > > > > >
> > > > > > -Adam
> > > > > >
> > > > > >
> > > > > > On 5/24/07, Alex Karasulu <akarasulu@apache.org> wrote:
> > > > > > >
> > > > > > > On 5/24/07, Mladen Turk <mturk@apache.org> wrote:
> > > > > > > >
> > > > > > > > Adam Fisk wrote:
> > > > > > > > > The slides were just posted from this Java One session
> > > > > > > > > claiming Grizzly blows MINA away performance-wise, and I'm
> > > > > > > > > just curious as to people's views on it.  They present some
> > > > > > > > > interesting ideas about optimizing selector threading and
> > > > > > > > > ByteBuffer use.
> > > > > > > > >
> > > > > > > > > http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > > > > > > >
> > > > > > > >
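The "ByteBuffer use" the slides refer to might amount to something like the
following sketch: allocate one (ideally direct) buffer per selector thread and
reuse it with clear()/flip() across reads, rather than allocating a fresh
buffer for every read. This is an assumption about what the slides mean, not
their actual code; the class and method names are invented here.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

// Hypothetical illustration of buffer reuse: one direct buffer, reused for
// every read cycle, instead of a new ByteBuffer per read.
public final class BufferReuse {

    // One reusable buffer; a real server would keep one per selector thread.
    private static final ByteBuffer BUF = ByteBuffer.allocateDirect(8192);

    /** Drains the channel through the shared buffer; returns bytes read. */
    public static int drain(ReadableByteChannel ch) throws IOException {
        int total = 0;
        BUF.clear();                    // reset for this read cycle
        while (ch.read(BUF) > 0) {
            BUF.flip();                 // switch the buffer to draining mode
            total += BUF.remaining();   // consume (here: just count) bytes
            BUF.clear();                // ready for the next read
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        ReadableByteChannel ch =
            Channels.newChannel(new ByteArrayInputStream(new byte[20_000]));
        System.out.println(drain(ch));  // prints 20000
    }
}
```

The design point is avoiding per-read allocation and garbage: direct buffers
are expensive to create, so reusing one amortizes that cost over many reads.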
> > > > > > > > I love slide 20!
> > > > > > > > JFA finally admitted that Tomcat's APR-NIO is faster than the
> > > > > > > > JDK one ;)
> > > > > > > > However, the last time I did benchmarks, the difference was
> > > > > > > > much more than 10%.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Maybe someone could comment on the performance improvements
> > > > > > > > > in MINA 2.0?
> > > > > > > >
> > > > > > > > He probably compared MINA's serial IO, and that is not usable
> > > > > > > > for production (yet). I wonder how it would look with a real
> > > > > > > > async HTTP server.
> > > > > > > > Nevertheless, benchmarks are like assholes. Everyone has one.
> > > > > > >
> > > > > > >
> > > > > > > Exactly!
> > > > > > >
> > > > > > > Incidentally, SUN has been trying to attack several projects
> > > > > > > via the performance angle for some time now.  Just recently I
> > > > > > > received a cease and desist letter from them when I compiled
> > > > > > > some performance metrics.  The point behind it was that we were
> > > > > > > not correctly configuring their products.  I guess they just
> > > > > > > want to make sure things are set up to their advantage.  That's
> > > > > > > what all these metrics revolve around, and if you ask me they're
> > > > > > > not worth a damn.  There are a million ways to make one product
> > > > > > > perform better than another depending on configuration,
> > > > > > > environment and the application.  However, are raw performance
> > > > > > > metrics as important as a good, flexible design?
> > > > > > >
> > > > > > > Alex
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
