hbase-user mailing list archives

From stack <st...@duboce.net>
Subject Re: # of dfs replications when using hbase
Date Fri, 11 Apr 2008 03:55:20 GMT
Rong-en Fan wrote:
>>  >  If you copy your hadoop-conf.xml to $HBASE_HOME/conf, does it then do the
>>  > right thing?  Maybe what's happening is that when hbase writes files, we're
>>  > using the hadoop defaults.
>>
>>  Yes, I can verify that by doing so, HBase respects my customized config.
>>  Shall I file a JIRA against HBase or Hadoop itself?
>>     
>
> When HBase was in hadoop/contrib, the hbase script added both HADOOP_CONF_DIR
> and HBASE_CONF_DIR to the CLASSPATH, so that dfs's configuration could be
> loaded correctly. However, after moving out of hadoop/contrib, it only adds
> HBASE_CONF_DIR.
>
> I can think of several possible solutions:
>
> 1) Set HADOOP_CONF_DIR in hbase-env.sh, then add HADOOP_CONF_DIR to the
>    CLASSPATH as before.
> 2) Instruct users to create links for hadoop-*.xml if they want to
>    customize some dfs settings.
> 3) If only a small set of dfs settings matter to dfs's client, maybe they
>    can be set via hbase-site.xml, and HBase would apply them when creating
>    a FileSystem object.
>   
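For what it's worth, option 1 might look something like the sketch below. The path /usr/local/hadoop/conf is only an assumed install location, and the exact spot where the hbase script builds its CLASSPATH will differ; this is a rough illustration of the idea, not the actual launcher code:

```shell
# Hypothetical addition to hbase-env.sh (option 1 above): point HBase at the
# Hadoop client configuration so dfs settings (e.g. dfs.replication) are found.
# /usr/local/hadoop/conf is an assumed path -- adjust to your own layout.
export HADOOP_CONF_DIR=/usr/local/hadoop/conf

# Then, in the hbase launcher script, prepend it to the classpath the way the
# old hadoop/contrib script effectively did:
CLASSPATH="${HADOOP_CONF_DIR}:${CLASSPATH}"
export CLASSPATH
```

With that in place, the Hadoop Configuration loaded by HBase's FileSystem client would pick up hadoop-site.xml from HADOOP_CONF_DIR instead of falling back to the compiled-in defaults.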
Thanks for finding this oversight of ours, Rong-en.  Please file a JIRA.
Make it a blocker for branch and trunk.  At a minimum, we should improve
our documentation so it includes all the suggestions you make above.
Thank you,
St.Ack
