hbase-user mailing list archives

From Yabo Xu <arber.resea...@gmail.com>
Subject Re: Could not obtain block error
Date Tue, 13 Jul 2010 09:13:38 GMT
OK, thanks. The thing that delays us in upgrading is the API change; we have
a bunch of old applications that sit on the 0.19.1 API.
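
For reference, the porting gap is roughly the one sketched below; the two
halves compile against different HBase versions, and the table and column
names are hypothetical:

    // 0.19.x client API (org.apache.hadoop.hbase.io.BatchUpdate / Cell):
    // writes go through BatchUpdate, reads return Cell
    HTable table = new HTable(new HBaseConfiguration(), "mytable");
    BatchUpdate bu = new BatchUpdate("row1");
    bu.put("info:name", Bytes.toBytes("some value"));
    table.commit(bu);
    Cell cell = table.get("row1", "info:name");

    // 0.20.x client API (org.apache.hadoop.hbase.client.Put / Get / Result):
    // the same operations via Put/Get
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("some value"));
    table.put(put);
    Result result = table.get(new Get(Bytes.toBytes("row1")));
    byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));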

Best,
Arber


On Tue, Jul 13, 2010 at 11:17 AM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

> Variety of reasons; without evidence (the master's log, for example) I
> can't tell exactly. Also, 0.19 contains far fewer reliability fixes
> than 0.20, especially 0.20.5.
>
> As a comparison, our 20-node production cluster is serving real-time
> data 24/7 without that kind of issue. We're running the latest CDH2
> and HBase 0.20 plus a couple of home-brewed patches that serve our own
> particular usage of HBase.
>
> J-D
>
> On Mon, Jul 12, 2010 at 6:46 PM, Yabo Xu <arber.research@gmail.com> wrote:
> > Thanks, J-D.
> >
> > This morning I found the data block had been automatically deleted, but
> > that block was indeed there before. And since there was not much traffic
> > on the test cluster, it seems more likely to be the double assignment
> > issue you mentioned.
> >
> > Just curious: how does that occur? We'd rather not have to restart every
> > time to address this issue.
> >
> > Thanks again.
> >
> > Best,
> > Arber
> >
> >
> > On Tue, Jul 13, 2010 at 12:21 AM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
> >
> >> > file=/hbase/-ROOT-/70236052/info/mapfiles/3687060941742211902/data
> >>
> >> Can you get the data of that file in HDFS? If so, then it could be an
> >> xciever problem
> >> (http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A5). If not, then
> >> there could be a double assignment issue and restarting the cluster
> >> would take care of it (since it's only a test env).
> >>
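
A minimal sketch of that check, assuming it's done from a small Java client
that simply reads the file end to end (the path is taken from the error
below, and the cluster's Hadoop configuration is assumed to be on the
classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // If this read fails with "Could not obtain block", the datanodes are
    // the suspects (e.g. the xciever limit, dfs.datanode.max.xcievers in
    // hdfs-site.xml); if it succeeds, double assignment is the likelier cause.
    public class ReadCheck {
      public static void main(String[] args) throws Exception {
        Path p = new Path(
            "/hbase/-ROOT-/70236052/info/mapfiles/3687060941742211902/data");
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataInputStream in = fs.open(p);
        byte[] buf = new byte[64 * 1024];
        long total = 0;
        for (int n; (n = in.read(buf)) > 0; ) {
          total += n;  // read to EOF to touch every block of the file
        }
        in.close();
        System.out.println("read " + total + " bytes OK");
      }
    }
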
> >> Seeing that you aren't using a 0.20 release (since we stopped using
> >> mapfiles in 0.20), I can only recommend upgrading to 0.20.5.
> >>
> >> J-D
> >>
> >> On Mon, Jul 12, 2010 at 2:36 AM, Yabo Xu <arber.research@gmail.com> wrote:
> >> > Hi there:
> >> >
> >> > On an internal testing cluster with 3 nodes, when I run "flush '.META'"
> >> > in the hbase shell, I get the following "Could not obtain block" error.
> >> > I checked around, and many posts say that it might be due to the crash
> >> > of some datanodes. But in my case, I checked the UI and all nodes appear
> >> > to be fine. Any other possibilities?
> >> >
> >> > Error details pasted below. Any help is appreciated!
> >> >
> >> > Best,
> >> > Arber
> >> >
> >> > hbase(main):001:0> flush '.META'
> >> > 10/07/12 17:29:30 WARN client.HConnectionManager$TableServers: Testing for table existence threw exception
> >> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server null for region , row '', but failed after 5 attempts.
> >> > Exceptions:
> >> > java.io.IOException: java.io.IOException: Could not obtain block: blk_-80326634570231114_202750 file=/hbase/-ROOT-/70236052/info/mapfiles/3687060941742211902/data
> >> >    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1707)
> >> >    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1535)
> >> >    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1662)
> >> >    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1592)
> >> >    at java.io.DataInputStream.readInt(DataInputStream.java:370)
> >> >    at org.apache.hadoop.hbase.io.SequenceFile$Reader.readRecordLength(SequenceFile.java:1909)
> >> >    at org.apache.hadoop.hbase.io.SequenceFile$Reader.next(SequenceFile.java:1939)
> >> >    at org.apache.hadoop.hbase.io.SequenceFile$Reader.next(SequenceFile.java:1844)
> >> >    at org.apache.hadoop.hbase.io.SequenceFile$Reader.next(SequenceFile.java:1890)
> >> >    at org.apache.hadoop.hbase.io.MapFile$Reader.next(MapFile.java:544)
> >> >    at org.apache.hadoop.hbase.regionserver.HStore.rowAtOrBeforeFromMapFile(HStore.java:1723)
> >> >    at org.apache.hadoop.hbase.regionserver.HStore.getRowKeyAtOrBefore(HStore.java:1695)
> >> >    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1089)
> >> >    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1555)
> >> >    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> >> >    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:632)
> >> >    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:912)
> >>
> >
>
