velocity-user mailing list archives

From Christopher Schultz <>
Subject Re: [Object] duplication
Date Wed, 11 May 2011 21:01:25 GMT

Oh, I forgot to mention the buffers as well. I have 72 copies of
int[4096] arrays that appear to contain nothing but zeros: { 0x00,
0x00, 0x00, ... }.

These are held by
references. Presumably, after a template is parsed, lots of these
buffers can be discarded, right?

Same with org.apache.velocity.runtime.parser.VelocityCharStream.buffer.

It looks like Velocity holds onto a lot of parser information after the
parse step has been completed. Certainly, much of this is useful for
error reporting, etc., but I wonder if it can be lightened in some way.

YourKit can search for sparse arrays and has found a number of them
stored in org.apache.velocity.runtime.parser.JJTParserState.nodes, which
is a java.util.Stack object with 320 elements, nearly all of which are
null. Perhaps those stacks could be trimmed in size once parsing is
complete.
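Since java.util.Stack extends java.util.Vector, trimming an over-grown
parser stack is a one-liner. A minimal sketch (using modern Java and a
hypothetical 320-slot stack, not actual Velocity code):

```java
import java.util.Stack;

final class StackTrimDemo {
    public static void main(String[] args) {
        // Simulate a parser stack whose backing array grew to 320
        // slots during parsing but holds only a few live nodes.
        Stack<Object> nodes = new Stack<>();
        nodes.ensureCapacity(320);
        nodes.push(new Object());

        // Stack extends Vector, so trimToSize() shrinks the backing
        // array down to the current element count, releasing the
        // mostly-null tail for garbage collection.
        nodes.trimToSize();
        System.out.println(nodes.capacity()); // now matches size()
    }
}
```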

I can also see that, in the introspection cache, there are a lot of
Class[] arrays of zero-length. Presumably there are many methods
introspected that have no arguments and so Class[0] will be popular. If
all of those could share a single instance of a Class[0] object, you
could save a bit of memory there as well.
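Sharing a canonical empty array is safe because zero-length arrays are
immutable in practice (there are no elements to mutate). A sketch of
what the introspection cache could do, with a hypothetical helper name
(this is not Velocity's actual API):

```java
// Hypothetical sketch: canonicalize zero-length Class[] arrays so the
// introspection cache stores one shared instance instead of many.
final class EmptyArgs {
    // Single shared instance for all no-argument method signatures.
    static final Class<?>[] NO_ARGS = new Class<?>[0];

    // Returns the shared instance for zero-length inputs, otherwise
    // the array unchanged.
    static Class<?>[] canonicalize(Class<?>[] types) {
        return (types != null && types.length == 0) ? NO_ARGS : types;
    }
}
```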

Sorry for the laundry list. I'm just buried in the profiler right now
and I can see all of it. ;)
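For reference, the library-local interning suggested in the quoted
message below (as opposed to java.lang.String.intern, which pins
strings for the life of the JVM) might look something like this. A
hedged sketch in modern Java; the class name and API are hypothetical,
not anything in Velocity:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a library-owned intern pool. Because the pool
// is an ordinary object, its canonical strings become collectible as
// soon as the pool itself is released, unlike String.intern.
final class StringPool {
    private final Map<String, String> pool = new ConcurrentHashMap<>();

    // Returns a canonical instance equal to s, recording s as the
    // canonical copy on first sight.
    String intern(String s) {
        String canonical = pool.putIfAbsent(s, s);
        return canonical != null ? canonical : s;
    }
}
```

Tokens like ")" and " " would then all point at one canonical String
per distinct value instead of thousands of duplicates.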


On 5/11/2011 4:48 PM, Christopher Schultz wrote:
> All,
> So, two things before I get started:
> 1. I know I have access to the source. I know I'm a committer.
> 2. I'm using Velocity 1.4
> I've recently been doing a lot of memory profiling of my webapp and it
> looks like Velocity is responsible for a lot of stuff hanging around in
> memory.
> Of course, we have more than 200 templates that we use regularly, so
> eventually they will all be parsed and end up in memory, so that's no
> surprise.
> What is surprising is the number of strings and buffers that I can see
> duplicated in memory. Here's a good example:
> We have a Velocimacro, defined in VM_global_library.vm, that looks
> something like this:
> #macro(stdFormLabel $formName $fieldName $bundle)
>     <label id="${fieldName}_label"
> for="$fieldName"#if($msg.exists("form.${formName}.${fieldName}.accelerator",
> $bundle))
> accesskey="$msg.get("form.${formName}.${fieldName}.accelerator",
> $bundle)"#end class="field">
>         $msg.get("form.${formName}.${fieldName}.label", $bundle)
>     </label>
> #end
> Basically, it builds a <label> element and grabs the actual text from a
> resource bundle. Well, it turns out that we have 630 in-memory instances
> of this string:
>   $msg.get("form.${formName}.${fieldName}.label", $bundle)
> That only represents about 80k of memory, so it's not really a big deal.
> Here's another one that gets repeated a lot:
> " onclick="var win = + (0 <=
> this.href.indexOf('?') ? '&' : '?') + 'popup=true', 'chadis_help',
> 'width=300,height=300,toolbar=no,directories=no,menubar=no');
> win.focus(); return false;"><img alt="[Help]" src="
> We have 500 of those in memory for a waste of more like 256k. Also not a
> big deal.
> This next one starts to become more of a big deal: there are 1MiB /each/
> of ")" and " " strings linked from
> org.apache.velocity.runtime.parser.Token objects.
> It's leading me to believe that there is an opportunity for potentially
> significant memory savings here if we do some String interning.
> I'm not suggesting that we use java.lang.String.intern because that
> would pin those String objects into memory until the JVM closes.
> Instead, I'm suggesting that maybe we implement our own String
> interning, at least for some things.
> I recently did the same thing to one of my own projects that was also
> parsing text and storing some strings from that text (an expression
> parser and evaluator), and the savings were pretty dramatic: I was able
> to save about 13MiB of heap space by interning these String objects
> which reduced the in-memory footprint of the whole library (including
> data) from about 20MiB to just 7MiB.
> I think something like this would make Velocity a leaner and meaner tool.
> Comments?
> -chris
