hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Upgrading a coprocessor
Date Wed, 29 Oct 2014 14:21:08 GMT
A rolling restart of the servers may have a bigger impact on operations - the
server hosting hbase:meta would be involved, which has more impact than
disabling / enabling a user table.

You should give your client ample timeouts. The following is an
incomplete list of relevant configs (you can find their explanations at
http://hbase.apache.org/book.html):

hbase.client.scanner.timeout.period
hbase.rpc.timeout
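
For example, a minimal client-side sketch of bumping those (the keys are the
ones above; the values are placeholders, not recommendations - tune them for
your workload):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClientTimeouts {
    public static Configuration create() {
        // Start from the defaults in hbase-site.xml on the classpath.
        Configuration conf = HBaseConfiguration.create();
        // Both values are in milliseconds and purely illustrative.
        conf.setInt("hbase.client.scanner.timeout.period", 120000);
        conf.setInt("hbase.rpc.timeout", 120000);
        return conf;
    }
}

You can also set the same keys in hbase-site.xml so every client picks them up.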

Cheers

On Tue, Oct 28, 2014 at 11:18 PM, Hayden Marchant <haydenm@amobee.com>
wrote:

> Thanks all for confirming what I thought was happening.
>
> I am considering implementing a pattern similar to Iain's, in which I
> version the path of the cp and disable/enable the table while upgrading
> the cp metadata.
>
> However, what are the operational considerations of disabling a table for
> a number of seconds versus a rolling restart of the region servers? Assuming
> that, however hard I try, there might still be a process or two accessing
> that table at the time, what sort of error handling will I need to be more
> aware of now? (I assume that MapReduce would recover from either of these
> two strategies?)
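>
> For concreteness, here is roughly the kind of client-side retry wrapper I
> imagine needing around reads during that window (a hypothetical helper; I am
> not sure exactly which exception surfaces while the table is disabled, so
> this simply retries on IOException):
>
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.client.Get;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.Result;
>
> public class RetryingGet {
>     // Retry a Get a few times so a brief disable/enable window does not
>     // fail the whole job.
>     public static Result getWithRetry(Configuration conf, String tableName,
>             byte[] row, int attempts) throws IOException, InterruptedException {
>         IOException last = null;
>         for (int i = 0; i < attempts; i++) {
>             HTable table = new HTable(conf, tableName);
>             try {
>                 return table.get(new Get(row));
>             } catch (IOException e) {
>                 last = e;            // e.g. retries exhausted while regions are offline
>                 Thread.sleep(5000L); // back off before the next attempt
>             } finally {
>                 table.close();
>             }
>         }
>         throw last != null ? last : new IOException("attempts must be >= 1");
>     }
> }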
>
> Thanks,
> Hayden
>
> ________________________________________
> From: iain wright <iainwrig@gmail.com>
> Sent: Wednesday, October 29, 2014 1:51 AM
> To: user@hbase.apache.org
> Subject: Re: Upgrading a coprocessor
>
> Hi Hayden,
>
> We ran into the same thing & ended up going with a rudimentary cp deploy
> script that appends an epoch to the cp name, places it on HDFS, and
> disables/modifies/re-enables the HBase table.
>
> Here's the issue for this: https://issues.apache.org/jira/browse/HBASE-9046
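>
> In case it's useful, here's a stripped-down sketch of that idea using the
> Java admin API instead of the shell (the jar path, class name and table name
> are made up, and it assumes the versioned jar has already been copied to
> HDFS):
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
>
> public class CpDeploy {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = HBaseConfiguration.create();
>         HBaseAdmin admin = new HBaseAdmin(conf);
>         String table = "mytable";
>         // Putting the epoch in the jar name gives the region servers a path
>         // they have never seen, so a fresh classloader picks up the new code.
>         long epoch = System.currentTimeMillis() / 1000;
>         // Table attribute format: <hdfs path>|<class>|<priority>|<args>
>         String spec = "hdfs:///cp/my-observer-" + epoch
>                 + ".jar|com.example.MyRegionObserver|1001|";
>         HTableDescriptor htd = admin.getTableDescriptor(table.getBytes());
>         htd.setValue("coprocessor$1", spec); // overwrites the previous entry
>         admin.disableTable(table);
>         admin.modifyTable(table.getBytes(), htd);
>         admin.enableTable(table);
>         admin.close();
>     }
> }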
>
>
> --
> Iain Wright
>
>
> On Tue, Oct 28, 2014 at 10:51 AM, Bharath Vissapragada <
> bharathv@cloudera.com> wrote:
>
> > Hi Hayden,
> >
> > Currently there is no workaround. We can't unload already-loaded classes
> > unless we make changes to HBase's classloader design, and I believe that's
> > not trivial.
> >
> > - Bharath
> >
> > On Tue, Oct 28, 2014 at 2:52 AM, Hayden Marchant <haydenm@amobee.com>
> > wrote:
> >
> > > I have been using a RegionObserver coprocessor on my HBase 0.94.6
> > > cluster for quite a while and it works great. I am currently upgrading
> > > the functionality. When doing some testing in our integration
> > > environment I met with the issue that even when I uploaded a new
> > > version of my coprocessor jar to HDFS, HBase did not recognize it, and
> > > it kept using the old version.
> > >
> > > I even disabled/re-enabled the table - no help. Even with a new table,
> > > it still loads the old class. Only when I changed the location of the
> > > jar in HDFS did it load the new version.
> > >
> > > I looked at the source code of CoprocessorHost and I see that it is
> > > forever holding a classloaderCache with no mechanism for clearing it
> > > out.
> > >
> > > I assume that if I restart the region server it will pick up the new
> > > version of my coprocessor.
> > >
> > > Is there any workaround for upgrading a coprocessor without either
> > > changing the path, or restarting the HBase region server?
> > >
> > > Thanks,
> > > Hayden
> > >
> > >
> >
> >
> > --
> > Bharath Vissapragada
> > <http://www.cloudera.com>
> >
>
