hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: CMF & NodeIsDeadException
Date Mon, 03 Jan 2011 20:55:16 GMT
In an ideal minor collection the live objects are copied from one part of
the *young* generation (the eden space plus the first survivor space) to
another part of the *young* generation (the second survivor space).

See:
http://java.sun.com/docs/hotspot/gc1.4.2/
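
To see what eden and the survivor spaces look like on a running region
server, something along these lines should work (standard JDK tools; <pid>
is the region server process id, and exact output varies by JVM version):

  jmap -heap <pid>          # prints the configured Eden/From/To space sizes
  jstat -gcutil <pid> 5000  # samples eden (E) and survivor (S0/S1) occupancy
                            # every 5 seconds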

On Mon, Jan 3, 2011 at 12:50 PM, Wayne <wav100@gmail.com> wrote:

> We have an 8GB heap. What should NewSize be? I just had another node die
> hard after going into a CMF storm. I swear it had 30+ solid CMFs in a row.
>
> I have no idea what eden space is or how to see what it is set to. ??
>
> Not knowing what else to do, I will start applying some of the settings I
> used to improve this on Cassandra, such as setting the occupancy fraction
> (flags sketched below). Any other ideas???
>
> Thanks.
>
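For reference, the occupancy-fraction tuning mentioned above is normally
done with JVM flags along these lines (the value is only illustrative, not
a recommendation for this workload):

  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly

The first flag makes CMS start a concurrent cycle once the tenured
generation is about 70% full; the second keeps the JVM from overriding
that threshold with its own heuristic.
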
> On Mon, Jan 3, 2011 at 12:40 PM, Stack <stack@duboce.net> wrote:
>
> > zookeeper.session.timeout is the config to toggle.  It's set to
> > 180 seconds in 0.90.0RC.  Is it not so in your deploy?
> >
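In case it helps, that timeout lives in hbase-site.xml and is given in
milliseconds; a sketch matching the 180-second default mentioned above:

  <property>
    <name>zookeeper.session.timeout</name>
    <value>180000</value>
  </property>
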
> > On Mon, Jan 3, 2011 at 5:13 AM, Wayne <wav100@gmail.com> wrote:
> > >
> > > Any help or suggestions would be appreciated. ParNew was getting large
> > > and taking too long (> 100ms), so I will try to limit the size with the
> > > suggestion from the performance tuning page (-XX:NewSize=6m
> > > -XX:MaxNewSize=6m).
> > >
> >
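As an aside, ParNew pause times and the eden size actually in use are
visible in the GC log once logging is turned on; for a Sun JVM of this
vintage the flags are roughly:

  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/path/to/gc.log

Each ParNew entry then reports its pause duration, and a CMF shows up
literally as "concurrent mode failure" in the log.
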
> > The CMS concurrent mode failure will be about trying to promote from
> > new space up into the tenured heap when there isn't space in the
> > tenured heap to take the promotion because of fragmentation.  You
> > could try putting an upper bound on the new size (what size had your
> > eden space grown to?).  That would put off the CMF some, but in a
> > long-running app, CMF seems unavoidable, yeah.
> >
> > A newsize of 6M is way too small given the heap sizes you've been
> > bandying about (you were thinking 64M?  Even then, that seems too
> > small).
> >
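Purely for illustration (the right number depends on the write load, so
treat this as a placeholder rather than a recommendation), bounding the new
size with the same flags quoted above but sized against an 8GB heap would
look something like:

  -XX:NewSize=128m -XX:MaxNewSize=128m
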
> > On failure of the node, all the regions came up again on new servers OK?
> >
> > St.Ack
> >
>
