hbase-user mailing list archives

From Amandeep Khurana <ama...@gmail.com>
Subject Re: HBase-0.20.0 multi read
Date Fri, 21 Aug 2009 08:17:35 GMT
On Fri, Aug 21, 2009 at 1:12 AM, <y_823910@tsmc.com> wrote:

> You mean my PCs are not good enough to run HBase well?


That's right. HBase is a RAM hogger. The nodes in my cluster have 8GB of RAM
each, and even that is on the low side... I run into trouble because of that.
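
If you do get machines with more memory, the heap given to each HBase daemon is
controlled by HBASE_HEAPSIZE in hbase-env.sh (value in MB). A minimal sketch, with
an illustrative value you would adjust to your own hardware:

    # hbase-env.sh -- maximum heap per HBase daemon, in MB (illustrative value)
    export HBASE_HEAPSIZE=4000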


>
> I've put 5 Oracle tables into HBase successfully; the biggest table's record
> count is only 50,000.


That's a small data set. Not much.


>
> Is there a client request limit for region server?


Good question. I don't have an answer straight away. However, I think it's got
to be related to the RPC handlers. I'd wait for someone else to answer this
more correctly.
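
If it does come down to the RPC handlers, the knob to look at would be
hbase.regionserver.handler.count in hbase-site.xml. A hedged sketch (the value
here is illustrative, not a recommendation):

  <!-- hbase-site.xml: number of RPC handler threads per region server -->
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>25</value>
  </property>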


>
> Two region servers just serving 5 clients, it's a little strange!
> Any suggested hardware spec for HBase?
> For that spec, how many clients can fetch data from HBase concurrently?
>

Depends on your use case. What are you trying to accomplish with HBase? In
any case, you would need about 8-9 nodes to have a stable setup.
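
On the client side, one thing worth checking (a guess, not a confirmed
diagnosis): every HTable built from its own HBaseConfiguration tends to get its
own ZooKeeper connection, so many threads can pile up connections quickly. A
minimal sketch of sharing a single configuration instance across threads, with
one HTable per thread since HTable isn't thread-safe (the table name and the
worker class here are made up for illustration):

    // Hypothetical multi-read worker: all threads share one HBaseConfiguration,
    // each thread creates its own HTable against that shared config.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MultiReadWorker implements Runnable {
        // One shared config for the whole JVM, intended so the client library
        // can reuse one underlying connection instead of opening one per thread.
        private static final HBaseConfiguration CONF = new HBaseConfiguration();

        private final String row;

        public MultiReadWorker(String row) {
            this.row = row;
        }

        public void run() {
            try {
                // Per-thread HTable; "my_table" is a placeholder table name.
                HTable table = new HTable(CONF, "my_table");
                Result result = table.get(new Get(Bytes.toBytes(row)));
                System.out.println(row + ": " + result.size() + " cells");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }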


>
> Fleming
>
>
>
>
>
> From: Amandeep Khurana <amansk@gmail.com>
> To: hbase-user@hadoop.apache.org
> cc: (bcc: Y_823910/TSMC)
> Subject: Re: HBase-0.20.0 multi read
> Date: 2009/08/21 03:49 PM
> Please respond to hbase-user
>
>
> On Fri, Aug 21, 2009 at 12:45 AM, <y_823910@tsmc.com> wrote:
>
> >
> > I have a 3-PC cluster (pc1, pc2, pc3).
> > Hadoop master (pc1), 2 slaves (pc2, pc3)
> >
> > HBase and ZK running on pc1, two region servers (pc2, pc3)
> >
> > pc1 : Intel Core2, 2.4GHz, RAM 1GB
> >
> > pc2 : Intel Core2, 2.4GHz, RAM 1GB
> >
> > pc3 : Intel Core2, 1.86GHz, RAM 2GB
> >
>
> This is a very low config for HBase. I doubt you'll be able to get even a
> remotely stable HBase instance going on this, more so if you are trying to
> test how much load it can take...
>
>
> >
> > -----------------------------------------------------------
> >
> > hbase-env.sh
> >  export HBASE_MANAGES_ZK=true
> >
> > -----------------------------------------------------------
> > <configuration>
> >
> >  <property>
> >    <name>hbase.cluster.distributed</name>
> >    <value>true</value>
> >    <description>true:fully-distributed with unmanaged Zookeeper Quorum
> >    </description>
> >  </property>
> >
> >   <property>
> >    <name>hbase.rootdir</name>
> >    <value>hdfs://convera:9000/hbase</value>
> >    <description>The directory shared by region servers.
> >    Should be fully-qualified to include the filesystem to use.
> >    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
> >    </description>
> >  </property>
> >
> >  <property>
> >    <name>hbase.master</name>
> >    <value>10.42.253.182:60000</value>
> >    <description>The host and port that the HBase master runs at.
> >    A value of 'local' runs the master and a regionserver in
> >    a single process.
> >    </description>
> >  </property>
> >  <property>
> >    <name>hbase.zookeeper.quorum</name>
> >    <value>convera</value>
> >     <description>Comma separated list of servers in the ZooKeeper Quorum.
> >    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
> >    By default this is set to localhost for local and pseudo-distributed
> >    modes of operation. For a fully-distributed setup, this should be set to
> >    a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in
> >    hbase-env.sh this is the list of servers which we will start/stop
> >    ZooKeeper on.
> >    </description>
> >  </property>
> >
> > <property>
> >     <name>hbase.zookeeper.property.maxClientCnxns</name>
> >    <value>30</value>
> >    <description>Property from ZooKeeper's config zoo.cfg.
> >    Limit on number of concurrent connections (at the socket level) that a
> >    single client, identified by IP address, may make to a single member of
> >    the ZooKeeper ensemble. Set high to avoid zk connection issues running
> >    standalone and pseudo-distributed.
> >    </description>
> >  </property>
> >
> > </configuration>
> >
> >
> >
> >
> >
> >
> >
> > From: Amandeep Khurana <amansk@gmail.com>
> > To: hbase-user@hadoop.apache.org
> > cc: (bcc: Y_823910/TSMC)
> > Subject: Re: HBase-0.20.0 multi read
> > Date: 2009/08/21 11:54 AM
> > Please respond to hbase-user
> >
> >
> > You ideally want to have 3-5 ZooKeeper servers outside the HBase servers...
> > 1 server is not enough. That could be causing you the trouble.
> >
> > Post logs from the master and the region server where the read failed.
> >
> > Also, what's your configuration? How many nodes, RAM, CPUs etc?
> >
> > On 8/20/09, y_823910@tsmc.com <y_823910@tsmc.com> wrote:
> > >
> > > Hi there,
> > >
> > > It worked well while I fired 5 threads to fetch data from HBase, but
> > > it failed after I increased to 6 threads.
> > > Although it only showed some WARNs, the threads' job couldn't be done!
> > > My HBase is the latest version, 0.20.
> > > I want to test HBase multi-read performance.
> > > Any suggestion?
> > > Thank you
> > >
> > > Fleming
> > >
> > >
> > > hbase-env.sh
> > >    export HBASE_MANAGES_ZK=true
> > >
> > > 09/08/21 09:54:07 WARN zookeeper.ZooKeeperWrapper: Failed to create /hbase
> > > -- check quorum servers, currently=10.42.253.182:2181
> > > org.apache.zookeeper.KeeperException$ConnectionLossException:
> > > KeeperErrorCode = ConnectionLoss for /hbase
> > >       at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
> > >       at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
> > >       at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:522)
> > >       at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:342)
> > >       at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:365)
> > >       at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.checkOutOfSafeMode(ZooKeeperWrapper.java:478)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:846)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:515)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:491)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:565)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:524)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:491)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:565)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:528)
> > >       at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:491)
> > >       at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:123)
> > >       at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:101)
> > >       at org.gridgain.examples.executor.FlowJob.getHBaseData(FlowJob.java:144)
> > >       at org.gridgain.examples.executor.FlowJob.call(FlowJob.java:78)
> > >       at org.gridgain.examples.executor.FlowJob.call(FlowJob.java:1)
> > >       at org.gridgain.grid.kernal.executor.GridExecutorCallableTask$1.execute(GridExecutorCallableTask.java:57)
> > >       at org.gridgain.grid.kernal.processors.job.GridJobWorker.body(GridJobWorker.java:406)
> > >       at org.gridgain.grid.util.runnable.GridRunnable$1.run(GridRunnable.java:142)
> > >       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> > >       at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> > >       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> > >       at org.gridgain.grid.util.runnable.GridRunnable.run(GridRunnable.java:194)
> > >       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> > >       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> > >       at java.lang.Thread.run(Thread.java:619)
> >
> >
> > --
> >
> >
> > Amandeep Khurana
> > Computer Science Graduate Student
> > University of California, Santa Cruz
> >
