james-server-dev mailing list archives

From Stefano Bagnara <apa...@bago.org>
Subject Re: OutOfMemory exception
Date Thu, 01 Apr 2010 07:44:32 GMT
2010/4/1 Eric Charles <eric.charles@u-mangate.com>:
> Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: Java
> exception: 'Java heap space: java.lang.OutOfMemoryError'. {prepstmnt
> 1363215207 INSERT INTO Message (id, bodyStartOctet, content, contentOctets,
> [...]
> So still a OOM exception that was shown by yet-another-component (in this
> case, the StoreMailbox).

OOMs are reported by whichever component happens to need memory once the heap is
exhausted, so there is almost no point in reading the exception stack trace
when an OOM happens in a complex system.

OOMs are the result of either (a) genuinely insufficient memory (some
component is configured to use more memory than is available) or (b) a
memory leak (some component does not free resources). I guess we are in
case (b) here.

So, either you go for a full profiler, or you at least take heap dumps.

We need to know whether memory usage grows steadily until the OOM, or
whether very frequent GCs keep freeing space but once in a while one is
not enough and the OOM is thrown; and whether the heap is full of unused
objects of a single class or instead of a full tree of different object types.

If you don't go for a full profiler, jmap -histo, jmap -dump, jstat and
jconsole are your friends here.
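For example, a rough diagnostic session could look like this (a sketch, not a
recipe: it assumes the James JVM's pid has been found with jps, and the paths
under /tmp are arbitrary):

```shell
# Find the pid of the James JVM
jps -l

# Class histogram: instance counts and bytes per class
# (":live" forces a full GC first, so only reachable objects are counted)
jmap -histo:live <pid> | head -30

# Full binary heap dump, for offline analysis in a profiler
jmap -dump:live,format=b,file=/tmp/james-heap.hprof <pid>

# GC activity sampled every 5000 ms: old-gen occupancy ("O" column)
# climbing steadily toward 100% across full GCs is the typical leak signature
jstat -gcutil <pid> 5000
```

Two histograms taken some minutes apart make the comparison the mail asks for:
if the same class keeps growing between snapshots, that points at scenario (b).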

Also, add the -XX:+HeapDumpOnOutOfMemoryError parameter to your JVM, so
that you get an automatic heap dump on OOM (you can also set this "live"
with jinfo).
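Concretely, that is (the -XX:HeapDumpPath value is just an illustrative
location; the flag defaults to the working directory if omitted):

```shell
# At startup: dump the heap automatically at the moment the OOM is thrown
java -Xmx512m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp/james-oom.hprof \
     ...

# Or enable it on the already-running process; this boolean flag is
# "manageable", so jinfo can flip it without a restart
jinfo -flag +HeapDumpOnOutOfMemoryError <pid>
```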

Some other "guessed" information can also help: is the memory usage
proportional to the number of processed messages? To their sizes? To the
uptime? To failed messages, etc.?

> There were only 4 .m64 files in /tmp (the ValidRcptHandler is doing its
> job).
> All 4 files were 0 bytes.
> I have now launched with EXTRA_JVM_ARGUMENTS="-Xms512m -Xmx4g" (so 4GB max
> memory).
> With the previous parameters ( -Xmx512m), the process was taking the whole
> 512MB.

Increasing the memory is rarely of help in this case: it will only help
if we are in scenario (a) (some component configured to use more memory
than we thought). You'll probably get the OOM anyway, it will just take
more time. If that happens we can then exclude (a) and go for the (b)
analysis.
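Rather than raising -Xmx to 4g, a more informative experiment might be to keep
the original limit and capture the failure (a sketch reusing the
EXTRA_JVM_ARGUMENTS variable mentioned above; the dump path is illustrative):

```shell
# Keep the 512MB limit so the OOM reproduces quickly, but grab a dump at
# the moment of failure; if the dump is dominated by one class, it's a
# leak (scenario b), not undersizing (scenario a)
EXTRA_JVM_ARGUMENTS="-Xms512m -Xmx512m \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/james-oom.hprof"
```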


To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
For additional commands, e-mail: server-dev-help@james.apache.org
