hbase-user mailing list archives

From Jean-Adrien <a...@jeanjean.ch>
Subject Re: global memcache limit of 396.9m exceeded cause forcing server shutdown
Date Thu, 05 Mar 2009 13:09:37 GMT


About this particular error:

Xiaogang He wrote:
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.util.Arrays.copyOf(Arrays.java:2786)
>         at
> java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:71)
>         at java.io.DataOutputStream.writeInt(DataOutputStream.java:182)
>         at
> org.apache.hadoop.hbase.io.ImmutableBytesWritable.write(ImmutableBytesWritable.java:115)

I have seen this recently; it was caused by the parallel garbage collector wasting a huge amount of time, as if the JVM were hanging during GC. I don't know exactly what the cause was, but I disabled it using -XX:-UseParallelGC and it works.
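For HBase specifically, the flag can be passed in via hbase-env.sh. A minimal sketch, assuming the standard HBASE_OPTS variable from the stock hbase-env.sh (adjust to wherever you set your JVM options):

```shell
# hbase-env.sh -- append to whatever options are already configured.
# -XX:-UseParallelGC disables the parallel (throughput) collector,
# so the JVM falls back to its non-parallel default collector.
export HBASE_OPTS="$HBASE_OPTS -XX:-UseParallelGC"
```

Restart the region servers after the change so the new options take effect.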

In -server mode with default settings, the JVM uses the parallel GC if you have a dual-core CPU; it uses parallel threads to speed up the GC process. In my case the JVM came from the Ubuntu sun-java6-jdk package (6-12-0ubun); the HotSpot version:
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)

It may not be related, but you can check whether the JVM in question is using the parallel GC with the command jmap -heap <pid>. If it is, try disabling it, and let me know whether it helps. If it does help, can you check your JVM version and tell me whether it is the same as mine?
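As an alternative to jmap (this is my own sketch, not something from the original thread), a small program using the standard java.lang.management API will print which collectors a JVM is actually running; with the parallel collector enabled on this era of HotSpot you would typically see names like "PS Scavenge" and "PS MarkSweep":

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// List the garbage collectors active in the current JVM.
// The reported names depend on which collector the JVM selected
// (e.g. the parallel collector vs. the serial collector).
public class GcCheck {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + " (collections so far: " + gc.getCollectionCount() + ")");
        }
    }
}
```

Run it with the same JVM options as your region server (for example with -XX:-UseParallelGC) to confirm which collector those options actually select.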

View this message in context: http://www.nabble.com/global-memcache-limit-of-396.9m-exceeded-cause-forcing-server--shutdown-tp22239311p22351297.html
Sent from the HBase User mailing list archive at Nabble.com.
