cayenne-commits mailing list archives

From "Andrus Adamchik (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CAY-1782) Deadlock when performing many concurrent inserts
Date Mon, 24 Dec 2012 14:40:12 GMT

    [ https://issues.apache.org/jira/browse/CAY-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13539275#comment-13539275 ]

Andrus Adamchik commented on CAY-1782:
--------------------------------------

Now I fully understand the deadlock. Indeed, things look pretty bad if you have a starved
connection pool.

A few ideas re: the new implementation:

                long val = longPkFromDatabase(node, entity);
                Queue<Long> nextPks = mkRange(val, val + cacheSize - 1);
                int iterations = 0;
                while (!pkCache.replace(entity.getName(), pks, nextPks) && iterations < 999) {
                    // the cache for this entity has changed, so re-fetch it, then update
                    pks = pkCache.get(entity.getName());
                    Queue<Long> previousPlusNext = new ConcurrentLinkedQueue<Long>(pks);
                    previousPlusNext.addAll(nextPks);
                    nextPks = previousPlusNext;
                    iterations++;
                }
                if (iterations >= 999) {
                    throw new IllegalStateException(
                            "Unable to add new primary keys to the cache for entity "
                                    + entity.getName());
                }
                pks = nextPks;
            }

            value = pks.poll();
1. Since we have a concurrent queue, do we have to clone and replace it every time? I think
when we get a new range of keys, we can simply append it to the end of the existing queue
(based on the earlier assertion that the order of keys is irrelevant).

2. The thread that hit 'longPkFromDatabase' can immediately grab the returned value for
itself and then append only the remaining part of the range to the queue. This way the
thread that called 'longPkFromDatabase' is never left without a PK (if, say, the queue is
drained by other threads faster than we are able to call 'poll'), and we also avoid an
extra pair of add/poll calls.
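To illustrate, here is a minimal sketch combining ideas 1 and 2. This is not the Cayenne
implementation; names such as 'PkSketch', 'nextPk', and 'fetchRangeStart' are hypothetical
stand-ins (the last one plays the role of 'longPkFromDatabase'). The fetching thread keeps
the first key of the new range for itself and appends only the remainder to the shared
ConcurrentLinkedQueue, with no clone-and-replace:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch of ideas 1 and 2; not actual Cayenne code.
public class PkSketch {

    private final ConcurrentMap<String, Queue<Long>> pkCache = new ConcurrentHashMap<>();
    private final int cacheSize = 20;

    // stand-in for longPkFromDatabase(node, entity): returns the start of a new range
    private long fetchRangeStart(String entity) {
        return 100L; // pretend the database handed us range [100, 119]
    }

    public long nextPk(String entity) {
        Queue<Long> pks = pkCache.computeIfAbsent(entity, k -> new ConcurrentLinkedQueue<>());
        Long value = pks.poll();
        if (value == null) {
            long start = fetchRangeStart(entity);
            // idea 2: keep the first key for ourselves, so this thread is never starved
            value = start;
            // idea 1: append the rest in place; no cloned queue, no CAS retry loop
            for (long pk = start + 1; pk < start + cacheSize; pk++) {
                pks.add(pk);
            }
        }
        return value;
    }
}
```

Since ConcurrentLinkedQueue is already thread-safe for concurrent add/poll, and key order
is irrelevant, the replace/retry bookkeeping disappears entirely in this variant.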
                
> Deadlock when performing many concurrent inserts
> ------------------------------------------------
>
>                 Key: CAY-1782
>                 URL: https://issues.apache.org/jira/browse/CAY-1782
>             Project: Cayenne
>          Issue Type: Bug
>          Components: Core Library
>    Affects Versions: 3.0, 3.1 (final), 3.2M1
>            Reporter: John Huss
>            Assignee: John Huss
>             Fix For: 3.2M1
>
>
> I've encountered a deadlock issue in production in an app performing many INSERTs. The
> deadlock was between the PK generator and the PoolManager (getting a DB connection). It is
> very bad. I added a unit test demonstrating the problem and a fix for it.
> The fix is possibly not ideal because it requires a larger data structure for holding
> the cached primary keys, but it is far better than the previous behavior.
> If this fix is acceptable, it should be back-ported to 3.1 as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
