hbase-user mailing list archives

From Anoop John <anoop.hb...@gmail.com>
Subject Re: HBase 0.98.1 batch Increment throws OperationConflictException
Date Wed, 17 Sep 2014 14:03:38 GMT
Yes, that is also possible. So in such a case this new behavior reports the
issue clearly. In the past the retried op would have silently succeeded,
giving a wrong result overall!

-Anoop-

On Wed, Sep 17, 2014 at 7:14 PM, Vin Gup <vingup2005@yahoo.com.invalid>
wrote:

> Ok. I will try your suggestions, but I see this error even with batches
> that have no row key duplicates. I still suspect that the client is timing
> out and retrying too often, and needs to back off because the region server
> is heavily loaded.
>
> -Vinay
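
If the problem really is the client retrying too aggressively against a loaded
region server, the usual client-side knobs are the retry pause, the retry count,
and the RPC timeout. A minimal sketch, assuming the Configuration below is the
one later passed to HConnectionManager.createConnection(); the values are
illustrative only, not recommendations:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    Configuration conf = HBaseConfiguration.create();
    // Wait longer between retries so a heavily loaded region server can catch up.
    conf.setLong("hbase.client.pause", 500);
    // Cap how many times an operation is retried before the client gives up
    // with RetriesExhaustedWithDetailsException.
    conf.setInt("hbase.client.retries.number", 10);
    // Allow each RPC more time before the client considers it failed and retries.
    conf.setInt("hbase.rpc.timeout", 120000);  // milliseconds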
>
> > On Sep 17, 2014, at 3:14 AM, Anoop John <anoop.hbase@gmail.com> wrote:
> >
> > This is an improvement (rather, an issue fix) introduced in the 0.98+ versions.
> > It applies to non-idempotent operations (like increment) which HBase clients
> > might retry on failure. Such a retry can give wrong results (possibly
> > incrementing twice for one increment op).
> >
> > Can you change your application-side code so as to avoid multiple
> > increments for the same key in one batch? Those can be combined into one
> > increment.
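
A minimal sketch of that combining step, assuming the counters live in a single
column family/qualifier and the per-row deltas are first accumulated in a map
(all names here are made up for illustration):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    import org.apache.hadoop.hbase.client.Increment;
    import org.apache.hadoop.hbase.util.Bytes;

    // Merge all deltas for the same row key into a single Increment per row,
    // so one batch never carries two increments for the same key.
    static List<Increment> buildBatch(Map<String, Long> deltasByRow,
                                      byte[] family, byte[] qualifier) {
      List<Increment> batch = new ArrayList<Increment>();
      for (Map.Entry<String, Long> e : deltasByRow.entrySet()) {
        Increment inc = new Increment(Bytes.toBytes(e.getKey()));
        inc.addColumn(family, qualifier, e.getValue());  // one combined delta per row
        batch.add(inc);
      }
      return batch;
    }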
> >
> > You can turn off this improvement with the config "hbase.client.nonces.enabled"
> > (configure it in the client-side xml file). But this is not a recommended
> > way. You can check with this.
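
For completeness, one way to set that flag is directly on the client
Configuration (which overrides the client-side xml); as noted above, this only
restores the old, silently retrying behaviour and is not recommended:

    Configuration conf = HBaseConfiguration.create();
    // Disables client-side nonce generation, so retried increments are no longer
    // flagged as conflicts by the server (pre-0.98 behaviour, may double count).
    conf.setBoolean("hbase.client.nonces.enabled", false);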
> >
> > -Anoop-
> >
> >
> > On Wed, Sep 17, 2014 at 1:04 PM, Vin Gup <vingup2005@yahoo.com.invalid>
> > wrote:
> >
> >> Yes, possibly. Why would that be a problem?
> >> The earlier client (0.94) didn't complain about it.
> >>
> >> Thanks,
> >> -Vinay
> >>
> >>> On Sep 17, 2014, at 12:16 AM, Anoop John <anoop.hbase@gmail.com> wrote:
> >>>
> >>> You have more than one increment for the same key in one batch?
> >>>
> >>> On Wed, Sep 17, 2014 at 12:33 PM, Vinay Gupta <vingup2005@yahoo.com.invalid>
> >>> wrote:
> >>>
> >>>> Also the regionserver keeps throwing exceptions like
> >>>>
> >>>> 2014-09-17 06:56:07,151 DEBUG [RpcServer.handler=10,port=60020]
> >>>> regionserver.ServerNonceManager: Conflict detected by nonce:
> >>>> [4387127846806256989:2793719453824938427], [state 0, hasWait false, activity 06:55:41.091]
> >>>> 2014-09-17 06:56:07,151 DEBUG [RpcServer.handler=10,port=60020]
> >>>> regionserver.ServerNonceManager: Conflict detected by nonce:
> >>>> [4387127846806256989:843474753753473839], [state 0, hasWait false, activity 06:55:41.094]
> >>>>
> >>>>
> >>>> Are we sending data too fast? Is there a client-side setting or a
> >>>> server-side setting we need to look at to alleviate this?
> >>>> Again, this was never a problem with the HBase 0.94 cluster.
> >>>>
> >>>> We are calling the batch API with a List<> of 1000 Increments, and we do
> >>>> approx 30000 Increments (30 batches) at a time.
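
For reference, a sketch of what that kind of batched increment call looks like
against the 0.98 client API; the table name, family, and qualifier are
placeholders, and the Increments would come from something like the buildBatch()
sketch above:

    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.Increment;

    Configuration conf = HBaseConfiguration.create();
    HConnection connection = HConnectionManager.createConnection(conf);
    HTableInterface table = connection.getTable("my_counters");  // placeholder table name
    try {
      // ~1000 combined Increments, one per distinct row key (see buildBatch above).
      List<Increment> batch = buildBatch(deltasByRow, family, qualifier);
      Object[] results = new Object[batch.size()];
      table.batch(batch, results);  // results[i] holds the Result for each action
    } finally {
      table.close();
      connection.close();
    }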
> >>>>
> >>>>
> >>>> -Vinay
> >>>>
> >>>>
> >>>> On Sep 16, 2014, at 11:24 PM, Vinay Gupta <vingup2005@yahoo.com.INVALID>
> >>>> wrote:
> >>>>
> >>>>>
> >>>>>>
> >>>>>> Hi,
> >>>>>> We are using the HBase batch API, and with 0.98.1 we get the following
> >>>>>> exception on using batch() with Increment:
> >>>>>> ————————————
> >>>>>> org.apache.hadoop.hbase.exceptions.OperationConflictException: The
> >>>>>> operation with nonce {5266048044724982303, 5395957753774586342} on row
> >>>>>> [rowkey13-20140331] may have already completed
> >>>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.startNonceOperation(HRegionServer.java:4199)
> >>>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4163)
> >>>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3424)
> >>>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3359)
> >>>>>>   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
> >>>>>>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
> >>>>>>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
> >>>>>>   at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> >>>>>>   at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> >>>>>>   at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> >>>>>>   at java.lang.Thread.run(Thread.java:745)
> >>>>>> ————————————
> >>>>>> Eventually the job fails with
> >>>>>> "Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException"
> >>>>>>
> >>>>>> The same job works on an HBase 0.94 installation. Any tips on which
> >>>>>> config settings to play with to resolve this?
> >>>>>> Is the application supposed to handle these exceptions? (Something new
> >>>>>> in HBase 0.98 or 0.96?)
> >>>>>>
> >>>>>> Thanks,
> >>>>>> -Vinay
> >>
>
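
On the last question above: one possible application-side handling, if the
batch is kept as-is, is to catch RetriesExhaustedWithDetailsException around
batch() and inspect which rows failed with OperationConflictException. A sketch,
continuing the variables from the earlier snippet (this is a suggestion, not
something the 0.98 client requires):

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
    import org.apache.hadoop.hbase.exceptions.OperationConflictException;
    import org.apache.hadoop.hbase.util.Bytes;

    try {
      table.batch(batch, results);
    } catch (RetriesExhaustedWithDetailsException e) {
      for (int i = 0; i < e.getNumExceptions(); i++) {
        if (e.getCause(i) instanceof OperationConflictException) {
          // The server saw this nonce before: an earlier attempt may already have
          // been applied, so blindly re-sending the increment could double count.
          System.err.println("Possible duplicate increment on row "
              + Bytes.toStringBinary(e.getRow(i).getRow()));
        }
        // Other causes are ordinary failures and can be retried safely.
      }
    } catch (IOException e) {
      // Handle other I/O failures here.
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }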
