hbase-user mailing list archives

From 梁景明 <futur...@gmail.com>
Subject Re: something wrong with hbase mapreduce
Date Mon, 06 Dec 2010 03:35:21 GMT
It is just one column of data; if it is that slow to do this, I can't use it
in my case.

I am trying Java code to delete the data instead of using the shell.
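
For reference, here is a minimal sketch of what I mean by deleting from Java code against the 0.20.x client API; the class name, timestamp, and qualifier are only illustrative (taken from the example below), not my actual code:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DeleteAtTimestamp {
        public static void main(String[] args) throws Exception {
            HBaseConfiguration hconf = new HBaseConfiguration();
            HTable table = new HTable(hconf, "t1".getBytes());
            long ts = 1287849600000L; // 2010-10-24, same timestamp as the put below
            Delete del = new Delete("a".getBytes());
            // pin the delete marker to the cell's explicit timestamp instead
            // of letting HBase stamp it with the current server time
            del.deleteColumn("f".getBytes(), Bytes.toBytes(ts), ts);
            table.delete(del);
        }
    }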

The Java code works fine. I am wondering whether the shell delete is setting
some current timestamp in HBase.

So when I put data with a timestamp earlier than the current one, it would not be set.

I am not sure about this.
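
One way I could test this guess (just a sketch, not something I have actually run; the "o3" value and class name are made up): after the shell deleteall, put one cell with the old explicit timestamp and one with the current time. If a scan then shows only the current-time cell, the shell delete marker was stamped with the current time and is masking the older put.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutAroundDeleteMarker {
        public static void main(String[] args) throws Exception {
            HBaseConfiguration hconf = new HBaseConfiguration();
            HTable table = new HTable(hconf, "t1".getBytes());
            long oldTs = 1287849600000L;             // 2010-10-24, older than the shell delete
            long newTs = System.currentTimeMillis(); // newer than the delete marker
            Put p = new Put("a".getBytes());
            p.add("f".getBytes(), Bytes.toBytes(oldTs), oldTs, "o1".getBytes());
            p.add("f".getBytes(), Bytes.toBytes(newTs), newTs, "o3".getBytes());
            table.put(p);
            table.flushCommits();
            // scan 't1' in the shell afterwards: if only the newTs cell appears,
            // the delete marker at the current time is hiding the older cell
        }
    }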

Thanks anyway,

2010/12/3 Lars George <lars.george@gmail.com>

> Did you check that the compaction had completed? Please "tail -f" (or
> similar) the master log to see when it is done. You may have tried too
> quickly for it to do its magic.
>
> On Fri, Dec 3, 2010 at 3:40 AM, 梁景明 <futureha@gmail.com> wrote:
> > Here is what I did; I think it is not just a MapReduce error.
> >
> > shell_1:
> > create 't1','f'
> >
> >
> > shell_2:
> > scan 't1'
> >
> > shell_3:
> > deleteall 't1','a'
> >
> > shell_4:
> > major_compact 't1'
> >
> > javacode:
> >        HBaseConfiguration hconf = new HBaseConfiguration(new Configuration());
> >        hconf.addResource(HbaseMapr.class.getResource("/hbase-site.xml"));
> >        HTable ptable = new HTable(hconf, "t1".getBytes());
> >        Put p = new Put("a".getBytes());
> >        Date d = DateUtil.getDate("yyyy-MM-dd", "2010-10-24");
> >        p.add("f".getBytes(), Bytes.toBytes(d.getTime()), d.getTime(), "o1".getBytes());
> >        d = DateUtil.getDate("yyyy-MM-dd", "2010-10-23");
> >        p.add("f".getBytes(), Bytes.toBytes(String.valueOf(d.getTime())), d.getTime(), "o2".getBytes());
> >        ptable.put(p);
> >        ptable.flushCommits();
> >
> > 1. I ran shell_1 to create the table.
> > 2. I ran the Java code to put some simple data into HBase.
> > 3. I ran shell_2:
> > ------------------------------------------------
> > ROW                          COLUMN+CELL
> >  a                           column=f:1287763200000, timestamp=1287763200000, value=o2
> >  a                           column=f:1287849600000, timestamp=1287849600000, value=o1
> > --------------------------------------------------
> > 4. I ran shell_3 and then shell_2; the data was deleted:
> > ---------------------------------------------------
> > ROW                          COLUMN+CELL
> > --------------------------------------------------------
> > 5. I ran the Java code again.
> > 6. I ran shell_2 to scan; the insert had failed:
> > ----------------------------------------------------------
> > ROW                          COLUMN+CELL
> > ----------------------------------------------------------
> > 7. I ran shell_4.
> > 8. I ran the Java code again.
> > 9. I ran shell_2 to scan; the insert had failed again:
> > ----------------------------------------------------------
> > ROW                          COLUMN+CELL
> > ----------------------------------------------------------
> >
> > On 2 December 2010 at 18:17, Lars George <lars.george@gmail.com> wrote:
> >
> >> So you are using explicit time stamps for the put calls? Is this related to
> >>
> >> https://issues.apache.org/jira/browse/HBASE-3300
> >>
> >> by any chance? You have to be extra careful with explicit timestamps
> >> as newer deletes can mask re-added puts that have an older timestamp.
> >>
> >> Try this:
> >>
> >> 1. Do the MR job
> >> 2. Do the delete from the shell
> >> 3. Check that it was deleted from the shell
> >> 4. Run a major compaction of the table on the shell (e.g.
> >> "major_compact <tablename>")
> >> 5. Re-run the MR job
> >> 6. Check if the value is there again.
> >>
> >> And finally let us know here :)
> >>
> >> Lars
> >>
> >> On Thu, Dec 2, 2010 at 2:48 AM, 梁景明 <futureha@gmail.com> wrote:
> >> > 0.20.6
> >> >
> >> > 2010/12/2 Lars George <lars.george@gmail.com>
> >> >
> >> >> What version of HBase are you using?
> >> >>
> >> >> On Dec 1, 2010, at 9:24, 梁景明 <futureha@gmail.com> wrote:
> >> >>
> >> >> > I found that if I do not set the timestamp of the put explicitly,
> >> >> > the MapReduce job can run repeatedly; otherwise it only works once.
> >> >> > The problem is that I scan by timestamp to get my data,
> >> >> > so putting with an explicit timestamp is what my scans rely on.
> >> >> >
> >> >> > Any ideas? Thanks.
> >> >> >
> >> >> > 2010/12/1 梁景明 <futureha@gmail.com>
> >> >> >
> >> >> >> Hi, I found a problem in my HBase MapReduce case.
> >> >> >>
> >> >> >> The first time I run the MapReduce job, TableMapReduceUtil works fine.
> >> >> >>
> >> >> >> Then I use the HBase shell to delete some data from the table that the
> >> >> >> MapReduce job wrote to.
> >> >> >>
> >> >> >> Then I run the MapReduce job again to insert some new data.
> >> >> >>
> >> >> >> Nothing changes; the MapReduce insert does not work.
> >> >> >>
> >> >> >> After that I drop the table and recreate it,
> >> >> >>
> >> >> >> run the MapReduce job again, and the data is inserted successfully.
> >> >> >>
> >> >> >> What is happening with MapReduce?
> >> >> >>
> >> >> >> Can it only insert into the table once?
> >> >>
> >> >
> >>
> >
>
