hbase-user mailing list archives

From "Rong-en Fan" <gra...@gmail.com>
Subject Re: # of dfs replications when using hbase
Date Fri, 11 Apr 2008 01:32:35 GMT
On Fri, Apr 11, 2008 at 1:14 AM, stack <stack@duboce.net> wrote:
> Rong-en Fan wrote:
>
> > I did so. I even did rm -rf on dfs's dir and ran namenode -format
> > before starting my dfs. hadoop fsck reports that the default replication
> > is 1, but the avg. block replication is 2.9x after I wrote some data into
> > hbase. The underlying dfs is used only by hbase; there are no other apps
> > on it.
> >
> >
>
>  What if you add a file using './bin/hadoop fs ....' -- i.e. don't have
> hbase in the mix at all -- does the file show as replicated?

The file shows 1 replication.

>  If you copy your hadoop-conf.xml to $HBASE_HOME/conf, does it then do the
> right thing?  Maybe what's happening is that when hbase writes files, we're
> using the hadoop defaults.

Yes, I can verify that after doing so, HBase respects my customized config.
Shall I file a JIRA against HBase or against Hadoop itself?
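For anyone hitting the same problem: the fix above amounts to making sure the Hadoop config that HBase loads carries your replication setting. A minimal sketch of the relevant property (the value 1 here matches the default replication reported by fsck in this thread; adjust to taste) that would go in the hadoop-site.xml copied into $HBASE_HOME/conf:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Number of replicas HDFS keeps for each block.
       HBase only sees this if the file is on its classpath
       (e.g. copied into $HBASE_HOME/conf). -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

Without this file on HBase's classpath, HBase falls back to whatever defaults ship with the Hadoop jars it bundles, which is the behavior described above.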

> > Hmm... as far as I understand the hadoop FileSystem API, you can
> > specify the # of replications when creating a file. But I did not find
> > hbase using it, correct?
> >
> >
>
>  We don't do it explicitly, but as I suggest above, we're probably using
> defaults instead of your custom config.
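For reference, the per-file replication the poster mentions is the short replication argument on FileSystem.create. A hedged sketch (the path /tmp/example and the replication factor 3 are illustrative, not anything HBase actually does; this needs a running Hadoop DFS, so it is not runnable standalone):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
    public static void main(String[] args) throws Exception {
        // Picks up hadoop-site.xml from the classpath, which is exactly
        // why copying it into $HBASE_HOME/conf changes HBase's behavior.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Explicit per-file replication (hypothetical values):
        short replication = 3;
        FSDataOutputStream out = fs.create(
                new Path("/tmp/example"),   // illustrative path
                true,                       // overwrite
                4096,                       // buffer size
                replication,                // overrides dfs.replication for this file
                fs.getDefaultBlockSize());
        out.close();
    }
}
```

As St.Ack notes, HBase does not call this overload explicitly; it uses the simpler create variants, so whatever dfs.replication it can see in its configuration wins.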
>
>  St.Ack

Thanks,
Rong-En Fan
