cayenne-commits mailing list archives

From "Andrus Adamchik (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CAY-1782) Deadlock when performing many concurrent inserts
Date Wed, 19 Dec 2012 08:45:23 GMT

    [ https://issues.apache.org/jira/browse/CAY-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535774#comment-13535774 ]

Andrus Adamchik edited comment on CAY-1782 at 12/19/12 8:43 AM:
----------------------------------------------------------------

I can't reproduce a deadlock per se, only connection pool exhaustion (so if you have stack
traces that prove an actual deadlock, I'd appreciate you posting them), but I am all for removing
synchronization from the PK generator. Big +1 on the effort.

> The fix is possibly not ideal because it requires a larger data structure for holding
the cached primary keys, but it is far better than the previous behavior. 

This is indeed performance-sensitive code. Can we use AtomicLong (or a wrapper around AtomicLong)
to store a range of ids? As an added benefit, it won't require synchronization when retrieving
a key.

(I edited this comment to remove my initial concerns about our attempts to avoid gaps... this
may really not be about avoiding gaps, but about minimizing DB trips for PK generation.)
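For illustration, here is a minimal sketch of the AtomicLong range idea (class and method names are mine, not Cayenne API): the current range lives in an AtomicLong, getAndIncrement() hands out keys lock-free, and a DB trip would only be needed when the range is exhausted.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not Cayenne code: a PK range backed by an AtomicLong.
// Retrieving a key is a single lock-free getAndIncrement(); refilling an
// exhausted range (the only part needing a DB trip) is not shown.
public class PkRange {
    private final AtomicLong next;
    private final long max; // exclusive upper bound of this range

    public PkRange(long start, long size) {
        this.next = new AtomicLong(start);
        this.max = start + size;
    }

    /** Returns the next key, or -1 if this range is exhausted. */
    public long nextId() {
        long id = next.getAndIncrement();
        return id < max ? id : -1;
    }
}
```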
                
      was (Author: andrus):
    I can't reproduce a deadlock per se, only connection pool exhaustion (so if you have stack
traces that prove an actual deadlock, I'd appreciate you posting them), but I am all for removing
synchronization from the PK generator. Big +1 on the effort.

> The fix is possibly not ideal because it requires a larger data structure for holding
the cached primary keys, but it is far better than the previous behavior. 

Can we use AtomicLong instead of a Set to store a range? 

Lots of effort here is made to not lose any PK range, and to order PKs. I don't worry too
much about either. It is ok if we skip over a PK value here and there and create a gap.
Also, a PK must be unique, but it doesn't have to be globally sequential. In fact, we can't guarantee
a certain PK insert order anyway, e.g. between multiple JVMs in a cluster, or even in your
range replacement algorithm, where we append our range to another range.

Also, a few questions/comments on the Set-based implementation:

1. Should we synchronize on 'pks' here instead of 'pkCache'?

synchronized (pkCache) {
   value = pks.first();
   pks.remove(value);
}
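For what it's worth, a sketch of the per-entity locking option (names are mine, not the patch's): synchronizing on the per-entity 'pks' set confines contention to one entity, whereas synchronizing on the shared 'pkCache' would serialize PK retrieval across all entities.

```java
import java.util.Map;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: lock the per-entity set rather than the
// shared cache, so threads working on different entities don't contend.
public class MonitorChoice {
    public static final Map<String, TreeSet<Long>> pkCache = new ConcurrentHashMap<>();

    public static long takeFirst(String entity) {
        TreeSet<Long> pks = pkCache.get(entity);
        synchronized (pks) { // per-entity monitor, not the whole cache
            long value = pks.first();
            pks.remove(value);
            return value;
        }
    }
}
```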


2.  while (!pkCache.replace(entity.getName(), pks, nextPks))

I suggest we add some last-resort exit condition in case our thread never gets a chance to
set its PK. Maybe a for loop from 0 to 999, and then throw an exception? Or maybe we just
drop our range if somebody else has already created another range...
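A bounded version of that retry loop might look like the following sketch (my own names; the 1000-attempt cap is arbitrary, and real code would merge the new range with whatever value another thread installed):

```java
import java.util.concurrent.ConcurrentMap;

// Sketch only: retry the compare-and-replace a bounded number of times,
// then fail loudly instead of spinning forever.
public class BoundedReplace {
    public static <K, V> void replaceOrFail(ConcurrentMap<K, V> cache,
                                            K key, V expected, V replacement) {
        for (int i = 0; i < 1000; i++) {
            if (cache.replace(key, expected, replacement)) {
                return;
            }
            // Another thread swapped the value; re-read and try again.
            // (Real code would merge 'replacement' with the current value.)
            expected = cache.get(key);
        }
        throw new IllegalStateException("could not install new PK range after 1000 attempts");
    }
}
```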


                  
> Deadlock when performing many concurrent inserts
> ------------------------------------------------
>
>                 Key: CAY-1782
>                 URL: https://issues.apache.org/jira/browse/CAY-1782
>             Project: Cayenne
>          Issue Type: Bug
>          Components: Core Library
>    Affects Versions: 3.0, 3.1 (final), 3.2M1
>            Reporter: John Huss
>            Assignee: John Huss
>             Fix For: 3.2M1
>
>
> I've encountered a deadlock issue in production in an app performing many INSERTs.  The
deadlock was between the PK generator and the PoolManager (getting a DB connection).  It is
very bad.  I added a unit test demonstrating the problem and a fix for it.
> The fix is possibly not ideal because it requires a larger data structure for holding
the cached primary keys, but it is far better than the previous behavior.
> If this fix is acceptable this should be back-ported to 3.1 as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
