velocity-user mailing list archives

From "Nathan Bubna" <nbu...@gmail.com>
Subject Re: Velocity Performance and Concurrency issues
Date Tue, 29 Jul 2008 00:52:17 GMT
On Mon, Jul 28, 2008 at 4:34 PM, Raymond Auge <rauge@liferay.com> wrote:
> Ok, after more review of Velocity code and also our usage, it occurs to me
> that what we may be doing wrong is not using the "toolbox" paradigm.
>
> Essentially what we want is complete integration, such that we handle all
> the request/response work, and all we want from Velocity is its processing &
> output, which we merge as needed throughout the rendering cycle.
>
> We essentially have three tiers at which we delegate to Velocity rendering:
>
> - wrapping a given portlet (this is like drawing a window, with menus,
> status bar, etc., in a GUI toolkit)
> - processing the layout of the page (like a desktop refresh which positions
> the individual portlets where they go on the page)
> - processing our dynamic CMS content within arbitrary portlets on any page
> (these are your standard contents any website usually has, news articles,
> ads, etc.)
>
> So, within a given request to the portal you will pass through at least two
> of these tiers: usually N portlet wrappings (1 per portlet on the given
> page), 1 layout rendering, and possibly N CMS content renderings. Each
> tier is completely independent of the others and has a different set of params.
>
> For example: Suppose we have a page with 20 news articles (rendered using
> VTL) in 20 portlets (wrappings rendered separately using VTL) on a two
> column layout (rendered in VTL).
>
> That makes 20+20+1 different times when we do:
>
> VelocityContext vc = new VelocityContext();
> ... add all the tools
> ... render
>
> during a single request.
>
> (Keep in mind this is a worst case scenario, because we do have
> caching...BUT...)
>
> Even the set of utility functions (tool classes) available at each tier may
> be slightly different.
>
> So, I believe that for our scenario, we should probably be using the
> "toolbox" approach because we are re-creating and re-populating the same
> list of tools (and params) into a new context on every request.

yeah, sounds like it would reduce the per-request workload.
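
fwiw, even before wiring up a real toolbox config, you can get most of
that win by chaining contexts: populate the tools once into a shared
context at startup and wrap a thin per-request context around it.  a
minimal sketch of the idea (the DateTool/MathTool here are just the
generic VelocityTools classes standing in for whatever you register
today, and the tools have to be thread-safe, as you say yours are):

    import java.io.StringWriter;

    import org.apache.velocity.Template;
    import org.apache.velocity.VelocityContext;
    import org.apache.velocity.app.VelocityEngine;
    import org.apache.velocity.tools.generic.DateTool;
    import org.apache.velocity.tools.generic.MathTool;

    public class SharedToolboxSketch {
        // populated once at startup; after that it is only ever read
        private static final VelocityContext TOOLBOX = new VelocityContext();
        static {
            TOOLBOX.put("date", new DateTool());
            TOOLBOX.put("math", new MathTool());
        }

        public String render(VelocityEngine engine, String templateName,
                             Object portlet) throws Exception {
            // chain a small per-request context over the shared one:
            // gets fall through to TOOLBOX, puts stay request-local
            VelocityContext vc = new VelocityContext(TOOLBOX);
            vc.put("portlet", portlet);

            StringWriter out = new StringWriter();
            Template template = engine.getTemplate(templateName);
            template.merge(vc, out);
            return out.toString();
        }
    }

that way those 20+20+1 context creations per request stop re-doing the
tool setup; each one just layers request data over the shared tools.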

> We might have three re-usable toolbox configurations, one for each type of
> Velocity usage. Our tools are all thread safe already, so that's not an
> issue.
>
> So, if we were to do this, would you expect that we would decrease the
> contention on the method cache?

hmm.  my guess is that it wouldn't.  the method cache is
class-oriented, not instance-oriented.  honestly, i've been staring at
this stuff a lot in the last week, and i don't think MethodCache needs
to be synchronized at all, especially after the changes i've made in
1.6-dev so far.  i can't see any problem beyond some "wasted" effort if
multiple threads happen to get past cache.get(foo) before a call to
cache.put(foo).  it doesn't matter if foo gets overwritten, nor if the
code to create/lookup foo is repeated.
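
to make the "wasted effort" case concrete, here's the shape of it (a
sketch only, not the actual IntrospectorCache/MethodCache code): the
values cached per Class are effectively immutable once built, so two
threads that miss at the same time just build equivalent entries and one
put() overwrites the other.

    import java.lang.reflect.Method;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class UnsyncedMethodCacheSketch {
        private final Map<Class<?>, Method[]> cache =
                new ConcurrentHashMap<Class<?>, Method[]>();

        public Method[] getMethods(Class<?> clazz) {
            Method[] methods = cache.get(clazz);
            if (methods == null) {
                // may run in several threads at once under contention...
                methods = clazz.getMethods();
                // ...but last writer wins and the values are equivalent,
                // so nothing is corrupted, only some reflection is repeated
                cache.put(clazz, methods);
            }
            return methods;
        }
    }

the worst case is a little duplicated reflection on a cold cache, which
is a much better deal than every lookup queuing on one monitor.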

so, where does that leave you?  obviously, i'm an advocate of the
"tool" approach :), so i think you should do that either way.  if you
don't use many Velocimacros (or use simple, small ones), then i think
you should move to Velocity 1.6-dev (especially as i'm about to drop
method cache synchronization).  if you do use Velocimacros
extensively, then you may want to wait for VELOCITY-607 to be resolved
or you may want to try and create your own fork of Velocity 1.5 that
brings in at least the introspector improvements that have been added
to 1.6-dev.

> Thanks for your input,
>
> Raymond
>
> On Thu, 2008-07-24 at 22:44 -0700, Nathan Bubna wrote:
>
> On Thu, Jul 24, 2008 at 10:31 PM, Raymond Auge <rauge@liferay.com> wrote:
>> Hello Nathan,
>>
>> We might be willing to move to 1.6-dev, but it really depends on its
>> stability.
>>
>> How would you compare it to the current 1.5 release?
>
> fewer bugs, much less memory use, generally faster, and it has new
> toys like support for vararg method calls and calling List methods on
> arrays. :)
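
for instance (1.6-dev only; the JoinTool below is a made-up tool just to
show off the vararg call, not something that ships with Velocity or the
tools):

    import java.io.StringWriter;

    import org.apache.velocity.VelocityContext;
    import org.apache.velocity.app.Velocity;

    public class NewToysSketch {
        public static class JoinTool {
            // vararg method, callable from a template in 1.6-dev
            public String join(String sep, Object... parts) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < parts.length; i++) {
                    if (i > 0) sb.append(sep);
                    sb.append(parts[i]);
                }
                return sb.toString();
            }
        }

        public static void main(String[] args) throws Exception {
            Velocity.init();
            VelocityContext ctx = new VelocityContext();
            ctx.put("tiers", new String[] { "layout", "portlet", "cms" });
            ctx.put("join", new JoinTool());

            StringWriter out = new StringWriter();
            // $tiers.size() and $tiers.get(n) treat the array like a List;
            // $join.join(...) resolves to the vararg method
            Velocity.evaluate(ctx, out, "newtoys",
                    "$tiers.size() tiers: $join.join(', ', $tiers.get(0), $tiers.get(1), $tiers.get(2))");
            System.out.println(out);   // expect: 3 tiers: layout, portlet, cms
        }
    }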
>
>> Is it as stable?
>
> if API stability is what you are curious about, the only external API
> that i recall offhand as being changed is the StringResourceLoader, as
> the 1.5 version was broken.  if by "as stable" you mean "as reliable",
> i think that it is, but i don't use it in any situations where it is a
> high load bottleneck for me.  so, my opinion there probably means less
> than you just trying it out yourself. :)
>
>> Ray
>>
>> On Thu, 2008-07-24 at 22:20 -0700, Nathan Bubna wrote:
>>
>> On Thu, Jul 24, 2008 at 10:15 PM, Raymond Auge <rauge@liferay.com> wrote:
>>> Hello Nathan,
>>>
>>> I just finished writing an alternate UberspectImpl based on our own
>>> MethodCache implementation. I'll let you know if we notice any
>>> significant changes in performance.
>>
>> Please do, and if so, would you be willing to share your code too?
>>
>>> Ray
>>>
>>> PS: We had already done some tweaking to use ConcurrentHashMap and
>>> removed some sync blocks in the cache... but we still hit a bottleneck.
>>
>> If you're willing to do such tweaks, then i'd highly recommend
>> starting with the current head version (Velocity 1.6-dev).  as has
>> been said, there have already been a lot of performance tweaks made,
>> and there are more in the pipeline already, just waiting on some
>> confirmation (see VELOCITY-606 and VELOCITY-595 for those).
>>
>>>
>>> On Thu, 2008-07-24 at 22:02 -0700, Nathan Bubna wrote:
>>>
>>>> On Thu, Jul 24, 2008 at 2:53 PM, Raymond Auge <rauge@liferay.com> wrote:
>>>> #snip()
>>>> > Under heavy load we hit a max throughput, and thread dumps during this
>>>> > time are completely filled with BLOCKED threads like the one below:
>>>> >
>>>> > [snip]
>>>> > "http-80-Processor47" daemon prio=10 tid=0x00002aabbdb90400 nid=0x5a59
>>>> > waiting for monitor entry [0x0000000044c72000..0x0000000044c74a80]
>>>> >   java.lang.Thread.State: BLOCKED (on object monitor)
>>>> >        at
>>>> >
>>>> >
>>>> > org.apache.velocity.util.introspection.IntrospectorBase.getMethod(IntrospectorBase.java:103)
>>>> >        - waiting to lock <0x00002aaad093d940> (a
>>>> > org.apache.velocity.util.introspection.IntrospectorCacheImpl)
>>>> >        at
>>>> >
>>>> >
>>>> > org.apache.velocity.util.introspection.Introspector.getMethod(Introspector.java:101)
>>>> #snip()
>>>>
>>>> I do find it interesting that there is so much blocking going on at
>>>> this particular point.  It didn't appear all that high on any of the
>>>> profiler outputs yet.  Perhaps that's just an oversight on my part, or
>>>> perhaps that may be because of the heavy evaluate() use in this
>>>> particular case, but still, if we can find a way to speed it up, that
>>>> would be good nonetheless.   I'll look into it a bit.  It may turn out
>>>> to be another spot that mostly needs to wait for the JDK 1.5
>>>> concurrency classes, but perhaps there is something that can be done.
>>>>  I do notice right off the bat that the synchronization of the get()
>>>> and put() methods of IntrospectorCacheImpl seems unnecessary as they
>>>> are being used within a block synchronized on their instance.  With
>>>> re-entrant synchronization that might not make a big difference, but
>>>> it's something.  I bet we could also be more fine-grained here and
>>>> synchronize on something like the Class being introspected.
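
to make that last idea concrete, a quick sketch of what per-class locking
could look like (this is not the 1.5 IntrospectorCacheImpl code, just the
shape such a change could take):

    import java.lang.reflect.Method;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PerClassIntrospectionSketch {
        private final Map<Class<?>, Method[]> cache =
                new ConcurrentHashMap<Class<?>, Method[]>();

        public Method[] getMethods(Class<?> clazz) {
            Method[] methods = cache.get(clazz);
            if (methods == null) {
                // lock on the class being introspected instead of one global
                // monitor, so lookups for different classes never contend
                synchronized (clazz) {
                    methods = cache.get(clazz);
                    if (methods == null) {
                        methods = clazz.getMethods();
                        cache.put(clazz, methods);
                    }
                }
            }
            return methods;
        }
    }

the thread dump above shows every worker waiting on the single
IntrospectorCacheImpl monitor; with something like this they would only
ever wait on the class they are actually resolving.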
>>>>
>
> ----------------------------------
> Raymond Augé
> Software Engineer
> Liferay, Inc.
> Enterprise. Open Source. For Life.
> ----------------------------------
>
> Liferay Meetup 2008 – Los Angeles
>
> August 1, 2008
>
> Meet and brainstorm with the creators of Liferay Portal, our partners and
> other members of our community!
>
> The day will consist of a series of technical sessions presented by our
> integration and services partners. There is time set aside for Q&A and
> corporate brainstorming to give the community a chance to give feedback and
> make suggestions!

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@velocity.apache.org
For additional commands, e-mail: user-help@velocity.apache.org

