hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Are there any single points of failure in a HBase configuration
Date Tue, 16 Jun 2009 21:44:55 GMT
Any replication support in 0.21 won't be even close to 2PC, since 2PC is more
or less a nightmare to implement.

But your approach sounds good in the meantime.

Good luck!
-ryan

On Tue, Jun 16, 2009 at 2:35 PM, Fred Zappert <fzappert@gmail.com> wrote:

> Ryan,
>
> Thanks for the information.
>
> In terms of replication support, I had already recommended handling replication
> by having the transactions processed at both data centers via a message queue
> (top-level replication), which does not require database-level replication.
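>
> To make that concrete, here is a minimal sketch of the queue-consumer side
> using the 0.20-era client API; one such consumer would run in each data center
> and apply every transaction to its local cluster.  The table, family, and
> qualifier names are just placeholders, not anything from our real schema:
>
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>     import org.apache.hadoop.hbase.client.HTable;
>     import org.apache.hadoop.hbase.client.Put;
>     import org.apache.hadoop.hbase.util.Bytes;
>
>     // One consumer per data center: take a transaction off the local queue
>     // and apply it to the local HBase cluster.
>     public class TxnApplier {
>       private final HTable table;
>
>       public TxnApplier(HBaseConfiguration conf) throws Exception {
>         this.table = new HTable(conf, "transactions");  // placeholder table name
>       }
>
>       public void apply(String txnId, byte[] payload) throws Exception {
>         Put put = new Put(Bytes.toBytes(txnId));
>         put.add(Bytes.toBytes("data"), Bytes.toBytes("payload"), payload);
>         table.put(put);  // plain insert/update, so queue redelivery is harmless
>       }
>     }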
>
> I've seen too many limitations with those over the years in the RDBMS field to
> want to deal with that again.
>
> This application is amenable to that approach since most of it involves data
> collection (inserts and simple updates) with no requirement for two-phase
> commits that we've discovered yet.
>
> Regards,
>
> Fred.
>
> On Tue, Jun 16, 2009 at 4:27 PM, Ryan Rawson <ryanobjc@gmail.com> wrote:
>
> > I haven't heard of it, but it would be nice :-)
> >
> >
> >
> > On Tue, Jun 16, 2009 at 2:21 PM, Ski Gh3 <skigh3@gmail.com> wrote:
> >
> > > I thought HDFS would fix the namenode as a SPOF just as HBase fixes the
> > > master in 0.20, so that is still not there yet?
> > >
> > > On Tue, Jun 16, 2009 at 2:04 PM, Ryan Rawson <ryanobjc@gmail.com> wrote:
> > >
> > > > HBase itself doesn't strictly have any SPOF in 0.20.  Multiple master
> > > > failover, etc.
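> > > >
> > > > For what it's worth, clients don't point at a single master address; they
> > > > find the cluster through the ZooKeeper quorum, so a standby master taking
> > > > over is mostly transparent to them.  A rough sketch (the quorum hosts and
> > > > table name below are made up):
> > > >
> > > >     import org.apache.hadoop.hbase.HBaseConfiguration;
> > > >     import org.apache.hadoop.hbase.client.HTable;
> > > >
> > > >     public class ClientSetup {
> > > >       public static void main(String[] args) throws Exception {
> > > >         // Clients discover the active master and the ROOT region via
> > > >         // ZooKeeper rather than a hard-coded master address.
> > > >         HBaseConfiguration conf = new HBaseConfiguration();
> > > >         conf.set("hbase.zookeeper.quorum",
> > > >                  "zk1.example.com,zk2.example.com,zk3.example.com");
> > > >         HTable table = new HTable(conf, "mytable");  // made-up table name
> > > >         System.out.println("connected to " + new String(table.getTableName()));
> > > >       }
> > > >     }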
> > > >
> > > > HBase depends on HDFS, which does have a SPOF in its namenode.  If that
> > > > goes down, everything is down.  Generally speaking, the namenode is
> > > > reliable, but the hardware is the issue.  You can have a quick recovery,
> > > > but there is still an outage.
> > > >
> > > >
> > > > HBase isn't explicitly designed to run across a WAN split between 2
> > > > datacenters.  It's certainly possible, but during certain link-down
> > > > scenarios you are looking at cluster splits.  The HBase master will decide
> > > > that regionservers it can no longer reach have died, and HDFS will assume
> > > > the lost datanodes are gone and start to re-replicate their data.
> > > >
> > > > In HBase 0.21, we are hoping to have replication support between
> > > > clusters.
> > > >
> > > > -ryan
> > > >
> > > >
> > > > On Tue, Jun 16, 2009 at 1:53 PM, Fred Zappert <fzappert@gmail.com> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > We're considering HBase for a customer-facing SaaS.  I saw some
> > > > > references to the Master instance and failure/failover scenarios on
> > > > > this list.
> > > > >
> > > > > We would be running this across at least two data centers in different
> > > > > cities or states.
> > > > >
> > > > > Which leads to the following questions:
> > > > >
> > > > > 1. Are there any single points of failure in an HBase configuration?
> > > > >
> > > > > 2. What would be the impact of one data center being down?
> > > > >
> > > > > 3. What would be the recovery time and procedure to restore normal
> > > > > operation
> > > > > on a new master?
> > > > >
> > > > > There are approximately 4M transactions/day.
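> > > > > (For scale, that's roughly 4,000,000 / 86,400 seconds, i.e. on the
> > > > > order of 46 transactions per second on average, before allowing for
> > > > > peaks.)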
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Fred.
> > > > >
> > > >
> > >
> >
>
