phoenix-dev mailing list archives

From "Chao Wang (Jira)" <>
Subject [jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data
Date Fri, 28 Aug 2020 03:09:00 GMT


Chao Wang updated PHOENIX-5860:
    Attachment: PHOENIX-5860.4.x.patch

> Throw exception which region is closing or splitting when delete data
> ---------------------------------------------------------------------
>                 Key: PHOENIX-5860
>                 URL:
>             Project: Phoenix
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 4.13.1
>            Reporter: Chao Wang
>            Assignee: Chao Wang
>            Priority: Blocker
>             Fix For: 4.x
>         Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, PHOENIX-5860.4.x.patch
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
> Currently, deletes are executed on the server side by the UngroupedAggregateRegionObserver
> class, which checks the isRegionClosingOrSplitting flag. When the flag is true, it throws
> new IOException("Temporarily unable to write from scan because region is closing or
> splitting").
> When a region comes online, the Phoenix coprocessor is initialized and
> isRegionClosingOrSplitting is false. Before a region splits, the flag is set to true. But if
> the split fails, the rollback does not set isRegionClosingOrSplitting back to false, so
> afterwards every write operation throws "Temporarily unable to write from scan because
> region is closing or splitting".
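> For context, the intended lifecycle of the flag, sketched against the HBase 1.x
> RegionObserver hooks (simplified; the real Phoenix code may differ):
>
>     import java.io.IOException;
>     import org.apache.hadoop.hbase.coprocessor.ObserverContext;
>     import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
>
>     // Region open initializes the coprocessor with the flag cleared.
>     // Before a split, the flag is raised to block writes while the split runs.
>     @Override
>     public void preSplit(ObserverContext<RegionCoprocessorEnvironment> ctx, byte[] splitRow)
>             throws IOException {
>         isRegionClosingOrSplitting = true; // never cleared again if the split rolls back (the bug)
>     }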
> So we should set isRegionClosingOrSplitting back to false in preRollBackSplit in the
> UngroupedAggregateRegionObserver class, as sketched below.
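> A hedged sketch of that fix, using the HBase 1.x preRollBackSplit hook (the actual patch
> may differ in detail):
>
>     // When a failed split is rolled back, clear the flag so subsequent
>     // writes are accepted again.
>     @Override
>     public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
>             throws IOException {
>         isRegionClosingOrSplitting = false;
>     }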
> A simple test: a data table split fails and the rollback succeeds, but deleting data still
> always throws the exception.
>  # Create a data table.
>  # Bulk-load data into the table.
>  # Patch the hbase-server code so that the region split throws an exception and gets rolled back.
>  # Split the region from the HBase shell.
>  # Check the regionserver log: the split fails, then the rollback succeeds.
>  # Delete data through Phoenix; it always throws the exception below (a JDBC sketch of this step follows the stack trace).
>  Caused by: Temporarily unable to write from scan because region is closing or splitting
>  Caused by: Temporarily unable to write from scan because region is closing or splitting
>  at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(
>  at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(
>  at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(
>  at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(
>  at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(
>  ... 5 more
>  at org.apache.phoenix.util.ServerUtil.parseServerException(
>  at org.apache.phoenix.iterate.BaseResultIterators.getIterators(
>  at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(
>  at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(
>  at
>  at
>  at org.apache.phoenix.compile.DeleteCompiler$2.execute(
>  at org.apache.phoenix.jdbc.PhoenixStatement$
>  at org.apache.phoenix.jdbc.PhoenixStatement$
>  at
>  at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(
>  at
>  at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>  at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>  at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>  at
>  at org.apache.spark.executor.Executor$
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(
>  at java.util.concurrent.ThreadPoolExecutor$
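> For illustration, a minimal sketch of step 6 as a plain JDBC call (the connection URL
> "jdbc:phoenix:zk-host" and table name TEST_TABLE are hypothetical):
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>     import java.sql.Statement;
>
>     // Hypothetical repro of step 6: a delete through the Phoenix JDBC driver.
>     try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
>          Statement stmt = conn.createStatement()) {
>         conn.setAutoCommit(true); // the server-side delete path requires autocommit
>         // After the rolled-back split this keeps failing with
>         // "Temporarily unable to write from scan because region is closing or splitting".
>         stmt.executeUpdate("DELETE FROM TEST_TABLE WHERE ID < 1000");
>     }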

This message was sent by Atlassian Jira
