hbase-user mailing list archives

From Amit Sela <am...@infolinks.com>
Subject Re: Bulk load from OSGi running client
Date Sun, 08 Sep 2013 16:14:18 GMT
The first issue I found was that I hadn't bundled libhadoop.so in my Hadoop
bundle (I saw a lot of "Got brand new decompressor" messages in the log);
that is fixed now.
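
For anyone hitting the same symptom: the fix is declaring the native library
in the bundle manifest with a Bundle-NativeCode header, roughly like below
(the path and the osname/processor values are illustrative for my Linux
x86-64 setup, adjust them to your own layout):

    Bundle-NativeCode: native/libhadoop.so; osname=Linux; processor=x86-64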

The main issue remains: it looks like the Configuration's class loader in
Compression.Algorithm holds a reference to the bundle at revision 0 (before
the jar update) instead of revision 1 (after the jar update). This could be
because of caching (or a static field), but then why does it work right
after I get the NullPointerException (it does, immediately, with no restarts
or bundle updates)?
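
To make the suspicion concrete, here is a minimal sketch (the class and
method names are mine, not HBase's) of how a statically cached Configuration
pins the class loader of whichever bundle revision first touched it:

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical holder: the static field is initialized exactly once,
    // so the class loader captured at that moment is what every later
    // caller sees, even after an OSGi bundle update.
    public class CachedConfHolder {

        private static Configuration conf;

        public static synchronized Configuration get() {
            if (conf == null) {
                conf = new Configuration();
                // Captured here: still the class loader of bundle
                // revision 0, even after the jar moves to revision 1.
                conf.setClassLoader(CachedConfHolder.class.getClassLoader());
            }
            return conf;
        }
    }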

If anyone has any idea please share, I will keep posting my findings.

Thanks,

Amit.




On Sun, Sep 8, 2013 at 3:57 AM, Stack <stack@duboce.net> wrote:

> On Sun, Sep 8, 2013 at 4:19 AM, Amit Sela <amits@infolinks.com> wrote:
>
> > I did some debugging and I have more input about my issue. The
> > Configuration in Compression.Algorithm has a class loader that holds a
> > reference to the original package (loaded at restart) and not to the
> > current one (loaded after the package update). Is the compression
> > algorithm cached somewhere, such that after a first read (get, scan)
> > from HBase, subsequent uses get a cached instance?
>
>
> Yes.  It does this rather than reload each time.
>
> Let me know if you need more help getting this all up and running.  Am
> interested in your findings.
>
> St.Ack
>
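
P.S. For anyone following along, the cache-on-first-use Stack confirms above
is roughly the following pattern (a simplified sketch of its shape, not the
actual HBase source):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;

    public enum Algorithm {
        GZ;

        // Created lazily on the first read and then reused, so the
        // Configuration (and its class loader) from that first call is
        // effectively frozen in for every later call.
        private volatile CompressionCodec codec;

        CompressionCodec getCodec(Configuration conf) {
            if (codec == null) {
                synchronized (this) {
                    if (codec == null) {
                        GzipCodec gz = new GzipCodec();
                        gz.setConf(conf);
                        codec = gz;
                    }
                }
            }
            return codec;
        }
    }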
