hbase-user mailing list archives

From stack <st...@duboce.net>
Subject Re: Can't obtainRowLock because Region is closed
Date Fri, 26 Dec 2008 19:40:06 GMT
Can you find this region, 
'test2,a521DfAPKkUbWqIOHc8pAQ==,1230151003797', deployed anywhere on 
your cluster?  If you scan your '.META.' table -- i.e., in the shell type 
'scan ".META."' -- can you see which server the master thinks it should be 
hosted by?  (See the info:server field.)  If you go to that 
regionserver's UI, does it say it's hosting this region?  If not, the master 
and regionserver are in disagreement (this seems to be the case here, 
because I could not find the region in the master log snippet you 
attached).  Try restarting the regionserver.  As to how your master and 
regionserver fell out of alignment, see if you can figure out what event 
made them deviate by grepping the master and regionserver logs 
for the problematic region name.
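The steps above can be run from the command line. A rough sketch follows; the region name is the one from this thread, while ${HBASE_HOME} and the log file names are assumptions that may differ on your install:

```shell
# 1. Ask .META. which server the master thinks hosts the region
#    (look at the info:server column in the output):
echo 'scan ".META."' | ${HBASE_HOME}/bin/hbase shell

# 2. Grep the master and regionserver logs for the problematic region
#    name to find the event where the two fell out of alignment
#    (log file names below are typical defaults, not from this thread):
grep 'test2,a521DfAPKkUbWqIOHc8pAQ==,1230151003797' \
  ${HBASE_HOME}/logs/*master*.log* \
  ${HBASE_HOME}/logs/*regionserver*.log*

# 3. If master and regionserver disagree, restart the regionserver:
${HBASE_HOME}/bin/hbase-daemon.sh stop regionserver
${HBASE_HOME}/bin/hbase-daemon.sh start regionserver
```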

St.Ack


Yossi Ittach wrote:
> Hi All
>
> After inserting a couple of million files, I get these errors and can't
> insert any more files. The master seems to be OK (log attached) and so does the
> regionserver (also attached, but not interesting).
>
>  Has anybody encountered something like this?
>
> *(Console)*
> org.apache.hadoop.hbase.NotServingRegionException:
> org.apache.hadoop.hbase.NotServingRegionException: Region
> test2,a521DfAPKkUbWqIOHc8pAQ==,1230151003797 closed
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.obtainRowLock(HRegion.java:1810)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.getLock(HRegion.java:1875)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.batchUpdate(HRegion.java:1406)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.batchUpdate(HRegion.java:1380)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.batchUpdate(HRegionServer.java:1114)
>         at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:585)
>         at
> org.apache.hadoop.hbase.ipc.HbaseRPC$Server.call(HbaseRPC.java:554)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
>
>
>         at
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:863)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:964)
>         at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:950)
>         at
> com.outbrain.globals.io.filesystem.HBaseFeedEntries.saveToMechanisem(HBaseFeedEntries.java:137)
>         at
> com.outbrain.globals.io.filesystem.HBaseFeedEntries.saveTo(HBaseFeedEntries.java:108)
>         at
> com.outbrain.BatchFeedInserter.BatchFeedInserter$DocFeeder.call(BatchFeedInserter.java:98)
>         at
> com.outbrain.BatchFeedInserter.BatchFeedInserter$DocFeeder.call(BatchFeedInserter.java:1)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:123)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:417)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:123)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:65)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:168)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
>         at java.lang.Thread.run(Thread.java:595)
>
> *(Master)*
> 2008-12-24 17:28:13,756 DEBUG org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner REGION => {NAME =>
> 'test2,yyPOB0kIPFX7lx9pkw7Hkw==,1230147405336', STARTKEY =>
> 'yyPOB0kIPFX7lx9pkw7Hkw==', ENDKEY => '', ENCODED => 1385803844, TABLE =>
> {{NAME => 'test2', IS_ROOT => 'false', IS_META => 'false', COMPRESSION =>
> 'RECORD', FAMILIES => [{NAME => 'obde_content', BLOOMFILTER => 'false',
> IN_MEMORY => 'false', VERSIONS => '3', BLOCKCACHE => 'false', LENGTH =>
> '2147483647', TTL => '-1', COMPRESSION => 'NONE'}]}}}, SERVER => '
> 192.168.252.213:60020', STARTCODE => 1230135899203
> 2008-12-24 17:28:13,757 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner scan of meta region {regionname: .META.,,1,
> startKey: <>, server: 192.168.252.213:60020} complete
> 2008-12-24 17:28:13,757 INFO org.apache.hadoop.hbase.master.BaseScanner: all
> meta regions scanned
> 2008-12-24 17:28:14,430 DEBUG org.apache.hadoop.hbase.master.ServerManager:
> Total Load: 66, Num Servers: 1, Avg Load: 66.0
> 2008-12-24 17:28:29,448 DEBUG org.apache.hadoop.hbase.master.ServerManager:
> Total Load: 66, Num Servers: 1, Avg Load: 66.0
> 2008-12-24 17:28:38,355 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Cache hit in
> table locations for row <> and tableName .META.: location server
> 192.168.252.213:60020, location region name .META.,,1
> 2008-12-24 17:28:44,465 DEBUG org.apache.hadoop.hbase.master.ServerManager:
> Total Load: 66, Num Servers: 1, Avg Load: 66.0
> 2008-12-24 17:28:59,483 DEBUG org.apache.hadoop.hbase.master.ServerManager:
> Total Load: 66, Num Servers: 1, Avg Load: 66.0
> 2008-12-24 17:29:07,706 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.rootScanner scanning meta region {regionname: -ROOT-,,0,
> startKey: <>, server: 192.168.252.213:60020}
> 2008-12-24 17:29:07,722 DEBUG org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.rootScanner REGION => {NAME => '.META.,,1', STARTKEY => '',
> ENDKEY => '', ENCODED => 1028785192, TABLE => {{NAME => '.META.', IS_ROOT =>
> 'false', IS_META => 'true', FAMILIES => [{NAME => 'historian', BLOOMFILTER
> => 'false', IN_MEMORY => 'false', VERSIONS => '2147483647', BLOCKCACHE =>
> 'false', LENGTH => '2147483647', TTL => '-1', COMPRESSION => 'NONE'}, {NAME
> => 'info', BLOOMFILTER => 'false', IN_MEMORY => 'false', VERSIONS => '1',
> BLOCKCACHE => 'false', LENGTH => '2147483647', TTL => '-1', COMPRESSION =>
> 'NONE'}]}}}, SERVER => '192.168.252.213:60020', STARTCODE => 1230135899203
> 2008-12-24 17:29:07,723 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.rootScanner scan of meta region {regionname: -ROOT-,,0,
> startKey: <>, server: 192.168.252.213:60020} complete
> 2008-12-24 17:29:09,215 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Cache hit in
> table locations for row <> and tableName .META.: location server
> 192.168.252.213:60020, location region name .META.,,1
> 2008-12-24 17:29:13,658 INFO org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner scanning meta region {regionname: .META.,,1,
> startKey: <>, server: 192.168.252.213:60020}
> 2008-12-24 17:29:13,685 DEBUG org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner REGION => {NAME => 'test2,,1230150960296',
> STARTKEY => '', ENDKEY => '+zNfXoK2KxY3/ZVR5Ko4Tw==', ENCODED => 934049166,
> TABLE => {{NAME => 'test2', IS_ROOT => 'false', IS_META => 'false',
> COMPRESSION => 'RECORD', FAMILIES => [{NAME => 'obde_content', BLOOMFILTER
> => 'false', IN_MEMORY => 'false', VERSIONS => '3', BLOCKCACHE => 'false',
> LENGTH => '2147483647', TTL => '-1', COMPRESSION => 'NONE'}]}}}, SERVER => '
> 192.168.252.213:60020', STARTCODE => 1230135899203
> 2008-12-24 17:29:13,686 DEBUG org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner REGION => {NAME =>
> 'test2,+zNfXoK2KxY3/ZVR5Ko4Tw==,1230150960296', STARTKEY =>
> '+zNfXoK2KxY3/ZVR5Ko4Tw==', ENDKEY => '/z7MIyWkSwKeEUpzP1nr/w==', ENCODED =>
> 966894266, TABLE => {{NAME => 'test2', IS_ROOT => 'false', IS_META =>
> 'false', COMPRESSION => 'RECORD', FAMILIES => [{NAME => 'obde_content',
> BLOOMFILTER => 'false', IN_MEMORY => 'false', VERSIONS => '3', BLOCKCACHE =>
> 'false', LENGTH => '2147483647', TTL => '-1', COMPRESSION => 'NONE'}]}}},
> SERVER => '192.168.252.213:60020', STARTCODE => 1230135899203
> 2008-12-24 17:29:13,687 DEBUG org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner REGION => {NAME =>
> 'test2,/z7MIyWkSwKeEUpzP1nr/w==,1230150955382', STARTKEY =>
> '/z7MIyWkSwKeEUpzP1nr/w==', ENDKEY => '0zI93xH77rV7n0ELh7/abw==', ENCODED =>
> 1597194225, TABLE => {{NAME => 'test2', IS_ROOT => 'false', IS_META =>
> 'false', COMPRESSION => 'RECORD', FAMILIES => [{NAME => 'obde_content',
> BLOOMFILTER => 'false', IN_MEMORY => 'false', VERSIONS => '3', BLOCKCACHE =>
> 'false', LENGTH => '2147483647', TTL => '-1', COMPRESSION => 'NONE'}]}}},
> SERVER => '192.168.252.213:60020', STARTCODE => 1230135899203
> 2008-12-24 17:29:13,688 DEBUG org.apache.hadoop.hbase.master.BaseScanner:
> RegionManager.metaScanner REGION => {NAME =>
> 'test2,0zI93xH77rV7n0ELh7/abw==,1230150955382', STARTKEY =>
> '0zI93xH77rV7n0ELh7/abw==', ENDKEY => '1yfvsSuQanvqzPLiozVHcw==', ENCODED =>
> 1323416154, TABLE => {{NAME => 'test2', IS_ROOT => 'false', IS_META =>
> 'false', COMPRESSION => 'RECORD', FAMILIES => [{NAME => 'obde_content',
> BLOOMFILTER => 'false', IN_MEMORY => 'false', VERSIONS => '3', BLOCKCACHE
>
> *(RegionServer)*
> ck:9558528 lastPacketInBlock:false
> 2008-12-24 17:28:45,255 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 148
> 2008-12-24 17:28:45,257 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 147
> 2008-12-24 17:28:46,563 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 148
> 2008-12-24 17:28:46,563 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:148 size:65557
> offsetInBlock:9623552 lastPacketInBlock:false
> 2008-12-24 17:28:46,563 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 149
> 2008-12-24 17:28:46,566 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 148
> 2008-12-24 17:28:47,439 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 149
> 2008-12-24 17:28:47,439 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:149 size:65557
> offsetInBlock:9688576 lastPacketInBlock:false
> 2008-12-24 17:28:47,439 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 150
> 2008-12-24 17:28:47,442 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 149
> 2008-12-24 17:28:48,831 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 150
> 2008-12-24 17:28:48,832 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:150 size:65557
> offsetInBlock:9753600 lastPacketInBlock:false
> 2008-12-24 17:28:48,832 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 151
> 2008-12-24 17:28:48,835 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 150
> 2008-12-24 17:28:50,109 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 151
> 2008-12-24 17:28:50,109 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:151 size:65557
> offsetInBlock:9818624 lastPacketInBlock:false
> 2008-12-24 17:28:50,109 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 152
> 2008-12-24 17:28:50,112 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 151
> 2008-12-24 17:28:50,942 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 152
> 2008-12-24 17:28:50,942 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 153
> 2008-12-24 17:28:50,942 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:152 size:65557
> offsetInBlock:9883648 lastPacketInBlock:false
> 2008-12-24 17:28:50,944 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 152
> 2008-12-24 17:28:51,858 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 153
> 2008-12-24 17:28:51,858 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:153 size:65557
> offsetInBlock:9948672 lastPacketInBlock:false
> 2008-12-24 17:28:51,858 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 154
> 2008-12-24 17:28:51,861 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 153
> 2008-12-24 17:28:54,004 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 154
> 2008-12-24 17:28:54,004 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:154 size:65557
> offsetInBlock:10013696 lastPacketInBlock:false
> 2008-12-24 17:28:54,004 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk allocating new packet 155
> 2008-12-24 17:28:54,007 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> received ack for seqno 154
> 2008-12-24 17:28:54,709 DEBUG org.apache.hadoop.dfs.DFSClient: DFSClient
> writeChunk packet full seqno 155
> 2008-12-24 17:28:54,709 DEBUG org.apache.hadoop.dfs.DFSClient: DataStreamer
> block blk_5412428864980798884_22695 wrote packet seqno:155 size:65557
> offsetInBlock:10078720 lastPacketInBlock:false
>
>
>
> Vale et me ama
> Yossi
>
>   

