My HBase version is quite old.
I use 0.96.2.
I plan to upgrade to HBase 1.2.4 this year.
________________________________
From: Ted Yu <yuzhihong@gmail.com>
Sent: Monday, March 20, 2017 12:28:45 PM
To: user@hbase.apache.org
Subject: Re: Why IOException occur when region server is closing (CloseRegionHandler.java#L110)?
Which release are you using?
Maybe related: HBASE-13592
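To make the failure path concrete, here is a simplified, self-contained sketch of the pattern you are asking about. This is not the actual HBase source; the class and method names below are stand-ins. The idea: closing a region flushes its memstore, and that flush can fail with an IOException (DroppedSnapshotException is an IOException subclass, as in your log). CloseRegionHandler cannot recover from a failed close, so it wraps the checked IOException in a RuntimeException, and the uncaught RuntimeException is what makes the region server abort:

```java
// Simplified model (NOT actual HBase code) of the close/abort path.
public class CloseRegionSketch {

    // Stand-in for HRegion.close(): flushing the memstore can fail,
    // e.g. when HDFS is unhealthy, surfacing as an IOException.
    static void closeRegion(boolean flushFails) throws java.io.IOException {
        if (flushFails) {
            throw new java.io.IOException(
                "Failed clearing memory after 6 attempts on region: ...");
        }
    }

    // Stand-in for CloseRegionHandler.process(): the handler cannot
    // leave a region half-closed, so the checked IOException is wrapped
    // in a RuntimeException; the event-handler loop then aborts the server.
    static void process(boolean flushFails) {
        try {
            closeRegion(flushFails);
        } catch (java.io.IOException ioe) {
            throw new RuntimeException(ioe);
        }
    }

    public static void main(String[] args) {
        process(false); // normal close: nothing thrown
        try {
            process(true);
        } catch (RuntimeException expected) {
            // In a real region server this is where abort() is triggered.
            System.out.println("server would abort: "
                + expected.getCause().getMessage());
        }
    }
}
```

So the specific reason is always the wrapped cause (here, the DroppedSnapshotException from the failed memstore flush); the region-server log just before the abort message is where to look for it.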
On Sun, Mar 19, 2017 at 8:22 PM, Kang Minwoo <minwoo.kang@outlook.com>
wrote:
> Yes, it happened in my cluster.
>
>
> [RegionServer LOG]
>
> 2017-03-20 11:02:21,466 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog:
> Couldn't find oldest seqNum for the region we are about to flush: []
>
> 2017-03-20 11:02:21,466 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Finished memstore flush of ~0/0, currentsize=/ for region . in 0ms,
> sequenceid=, compaction requested=false
>
> 2017-03-20 11:02:21,466 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer:
> ABORTING region server : Unrecoverable exception while closing region ,
> still finishing close
>
> org.apache.hadoop.hbase.DroppedSnapshotException: Failed clearing memory
> after 6 attempts on region: .
>
> at org.apache.hadoop.hbase.regionserver.HRegion.doClose(
> HRegion.java:1108)
>
> at org.apache.hadoop.hbase.regionserver.HRegion.close(
> HRegion.java:1046)
>
> at org.apache.hadoop.hbase.regionserver.handler.
> CloseRegionHandler.process(CloseRegionHandler.java:147)
>
> at org.apache.hadoop.hbase.executor.EventHandler.run(
> EventHandler.java:128)
>
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)
>
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:615)
>
> at java.lang.Thread.run(Thread.java:745)
>
> 2017-03-20 11:02:21,467 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer:
> RegionServer abort: loaded coprocessors are: []
>
> 2017-03-20 11:02:21,528 INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
> STOPPED: Unrecoverable exception while closing region , still finishing
> close
>
> 2017-03-20 11:02:21,528 ERROR org.apache.hadoop.hbase.executor.EventHandler:
> Caught throwable while processing event M_RS_CLOSE_REGION
>
> java.lang.RuntimeException: org.apache.hadoop.hbase.DroppedSnapshotException:
> Failed clearing memory after 6 attempts on region: .
>
> at org.apache.hadoop.hbase.regionserver.handler.
> CloseRegionHandler.process(CloseRegionHandler.java:161)
>
> at org.apache.hadoop.hbase.executor.EventHandler.run(
> EventHandler.java:128)
>
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)
>
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:615)
>
> at java.lang.Thread.run(Thread.java:745)
>
> Caused by: org.apache.hadoop.hbase.DroppedSnapshotException: Failed
> clearing memory after 6 attempts on region: .
>
> at org.apache.hadoop.hbase.regionserver.HRegion.doClose(
> HRegion.java:1108)
>
> at org.apache.hadoop.hbase.regionserver.HRegion.close(
> HRegion.java:1046)
>
> at org.apache.hadoop.hbase.regionserver.handler.
> CloseRegionHandler.process(CloseRegionHandler.java:147)
>
> ... 4 more
>
> 2017-03-20 11:02:21,528 INFO org.apache.hadoop.ipc.RpcServer: Stopping
> server on
>
> 2017-03-20 11:02:21,531 INFO org.apache.hadoop.ipc.RpcServer:
> RpcServer.listener,port=: stopping
>
> 2017-03-20 11:02:21,531 INFO org.apache.hadoop.ipc.RpcServer:
> Priority.RpcServer.handler=0,port=: exiting
>
> 2017-03-20 11:02:21,531 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
> Sending interrupt to stop the worker thread
>
> 2017-03-20 11:02:21,531 INFO org.apache.hadoop.ipc.RpcServer:
> Priority.RpcServer.handler=1,port=: exiting
>
>
> ...
>
>
> 2017-03-20 11:02:30,556 INFO org.apache.zookeeper.ZooKeeper: Session:
> closed
>
> 2017-03-20 11:02:30,556 INFO org.apache.zookeeper.ClientCnxn: EventThread
> shut down
>
> 2017-03-20 11:02:30,556 INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
> stopping server ; zookeeper connection closed.
>
> 2017-03-20 11:02:30,556 INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
> exiting
>
> 2017-03-20 11:02:30,556 ERROR org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine:
> Region server exiting
>
> java.lang.RuntimeException: HRegionServer Aborted
>
> at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.
> start(HRegionServerCommandLine.java:66)
>
> at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(
> HRegionServerCommandLine.java:85)
>
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>
> at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(
> ServerCommandLine.java:126)
>
> at org.apache.hadoop.hbase.regionserver.HRegionServer.
> main(HRegionServer.java:2340)
>
> 2017-03-20 11:02:30,593 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook:
> Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.
> hadoop.fs.FileSystem$Cache$ClientFinalizer@56ddd32a
>
> 2017-03-20 11:02:30,593 INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
> STOPPED: Shutdown hook
>
> 2017-03-20 11:02:30,593 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook:
> Starting fs shutdown hook thread.
>
> 2017-03-20 11:02:30,593 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to
> close file
>
> java.net.SocketTimeoutException: 20000 millis timeout while waiting for
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
> local= remote=]
>
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(
> SocketIOWithTimeout.java:164)
>
> at org.apache.hadoop.net.SocketInputStream.read(
> SocketInputStream.java:161)
>
> at org.apache.hadoop.net.SocketInputStream.read(
> SocketInputStream.java:131)
>
> at org.apache.hadoop.net.SocketInputStream.read(
> SocketInputStream.java:118)
>
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
>
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
>
> at org.apache.hadoop.hdfs.protocolPB.PBHelper.
> vintPrefixed(PBHelper.java:1984)
>
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> transfer(DFSOutputStream.java:1064)
>
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> addDatanode2ExistingPipeline(DFSOutputStream.java:1031)
>
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> setupPipelineForAppendOrRecovery(DFSOutputStream.java:1175)
>
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> processDatanodeError(DFSOutputStream.java:924)
>
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.
> run(DFSOutputStream.java:486)
>
> 2017-03-20 11:02:30,594 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook:
> Shutdown hook finished.
>
>
> [HMaster LOG]
>
> 2017-03-20 11:02:21,525 ERROR org.apache.hadoop.hbase.master.HMaster:
> Region server reported a fatal error:
>
> ABORTING region server : Unrecoverable exception while closing region ,
> still finishing close
>
> 2017-03-20 11:02:22,261 INFO org.apache.hadoop.hbase.master.RegionStates:
> Offlined from
>
> 2017-03-20 11:02:22,723 INFO org.apache.hadoop.hbase.master.RegionStates:
> Offlined from
>
> 2017-03-20 11:02:30,535 INFO org.apache.hadoop.hbase.zookeeper.RegionServerTracker:
> RegionServer ephemeral node deleted, processing expiration []
>
> 2017-03-20 11:02:31,165 INFO org.apache.hadoop.hbase.master.handler.ServerShutdownHandler:
> Splitting logs for before assignment.
>
>
> Thanks,
>
> Minwoo.
>
> ________________________________
> From: Ted Yu <yuzhihong@gmail.com>
> Sent: Monday, March 20, 2017 12:10:46 PM
> To: user@hbase.apache.org
> Subject: Re: Why IOException occur when region server is closing
> (CloseRegionHandler.java#L110)?
>
> See HBASE-4270
>
> Did you see this happen in your cluster?
> If so, mind sharing related log snippets?
>
> Cheers
>
> On Sun, Mar 19, 2017 at 7:50 PM, Kang Minwoo <minwoo.kang@outlook.com>
> wrote:
>
> > Hello!
> >
> > In this code (https://github.com/apache/hbase/blob/master/hbase-
> > server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/
> > CloseRegionHandler.java#L110),
> > an IOException can occur on the region server while it is closing.
> > Why does an IOException occur here?
> > If I want to know the specific reason, where should I check?
> >
> > Thanks,
> > Minwoo.
> >
> >
>