lucene-dev mailing list archives

From "Cao Manh Dat (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SOLR-13416) Overseer node getting stuck
Date Mon, 22 Apr 2019 08:46:00 GMT

    [ https://issues.apache.org/jira/browse/SOLR-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16822953#comment-16822953 ]

Cao Manh Dat edited comment on SOLR-13416 at 4/22/19 8:45 AM:
--------------------------------------------------------------

There is an improvement made by Mark Miller to OverseerTaskProcessor in SOLR-12801, so the
problem may already be fixed. Still, here are some of my investigations into the cause of the
problem, though none of them is conclusive on its own.

No.1 hypothesis
 * A task is removed from runningTasks only when OverseerTaskProcessor.Runner#run() is actually
executed. The task won't be removed from runningTasks in the following cases:
 ** a failure happens in messageHandler.processMessage
 ** the executor is not able to enqueue the task to its pool
 * Therefore runningTasks can keep growing; once its size hits 100, the OverseerTaskProcessor
gets stuck and cannot process any further tasks (see the sketch below).
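
Roughly the pattern I have in mind, as a simplified sketch (hypothetical names; not the actual
OverseerTaskProcessor code):
{code}
// Illustrative sketch only (simplified, hypothetical names) of how the
// runningTasks set can fill up when entries are removed only at the end of a
// successfully executed Runner.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

class TaskProcessorSketch {
  static final int MAX_PARALLEL_TASKS = 100;
  final Set<String> runningTasks = ConcurrentHashMap.newKeySet();
  final ExecutorService executor;

  TaskProcessorSketch(ExecutorService executor) { this.executor = executor; }

  void dispatch(String taskId, Runnable processMessage) {
    if (runningTasks.size() >= MAX_PARALLEL_TASKS) {
      return;                          // processor is now stuck: nothing new is accepted
    }
    runningTasks.add(taskId);
    try {
      executor.execute(() -> {
        processMessage.run();          // case 1: if this throws, the line below is skipped
        runningTasks.remove(taskId);   // and the entry leaks
      });
    } catch (RejectedExecutionException e) {
      // case 2: the pool rejected the task, the Runner never runs, and the
      // entry is never removed either -> runningTasks grows toward 100
    }
  }
}
{code}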

No.2 hypothesis
 * The lockTask() call always returns null, which means no task can be processed. This is not
likely to be the cause (see the sketch below) because:
 ** the OverseerStatus operation should still get through, since it is lock-free (it does not
need a lock to be processed)
 ** the total number of ops processed by the new Overseer is around 500, which does not reach
the limit (1000), so new operations always have a chance to be fetched
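
As a rough sketch of why this seems unlikely (again simplified, hypothetical names, not the real
dispatch loop): even if lockTask() always returned null, lock-free operations would still be
dispatched.
{code}
// Illustrative sketch only: an operation that needs no lock, such as
// OVERSEERSTATUS, would still be processed even if lockTask() were broken.
import java.util.List;

class DispatchSketch {
  static class Task {
    final String op;
    final boolean needsLock;
    Task(String op, boolean needsLock) { this.op = op; this.needsLock = needsLock; }
  }

  interface LockTask { Object lock(Task task); }

  void dispatchBatch(List<Task> batch, LockTask lockTask) {
    for (Task task : batch) {
      if (task.needsLock) {
        Object lock = lockTask.lock(task);   // hypothesis: always returns null
        if (lock == null) {
          continue;                          // the locked task is skipped this round
        }
      }
      // lock-free tasks reach this point regardless of lockTask(), so some
      // operations would still be processed even if the hypothesis were true
      System.out.println("processing " + task.op);
    }
  }
}
{code}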

No.3 hypothesis
 * The {{lock}} only gets unlocked in these cases:
 ** a KeeperException.NodeExistsException or an InterruptedException is thrown
 ** the {{Runner}} gets executed
 * Therefore, if any other kind of {{KeeperException}} is thrown, or something prevents the
{{Runner}} from being executed (for example, the executor is full), the lock never gets
unlocked (sketched below).
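
A simplified sketch of the leak pattern I mean (hypothetical names and structure, not the actual
Solr code):
{code}
// Illustrative sketch of hypothesis no.3: the lock is released only on two
// specific exceptions or inside the Runner, so any other failure path leaks it.
import java.util.concurrent.ExecutorService;
import org.apache.zookeeper.KeeperException;

class LockLeakSketch {
  /** Stand-in for the per-collection lock handed out by LockTree. */
  interface SessionLock { void unlock(); }

  void process(SessionLock lock, ExecutorService executor, Runnable runner) throws Exception {
    try {
      markTaskAsRunningInZk();       // may throw any KeeperException
      executor.execute(() -> {       // may throw RejectedExecutionException
        try {
          runner.run();
        } finally {
          lock.unlock();             // unlock path 1: the Runner actually executed
        }
      });
    } catch (KeeperException.NodeExistsException | InterruptedException e) {
      lock.unlock();                 // unlock path 2: only these two exceptions
      throw e;
    }
    // Any other KeeperException, or a RejectedExecutionException from the
    // executor, escapes this method without unlocking -> the lock leaks.
  }

  void markTaskAsRunningInZk() throws KeeperException, InterruptedException {
    // placeholder for the ZooKeeper call that can fail
  }
}
{code}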

Another bug that I found here, in {{OverseerCollectionMessageHandler.lockTask()}} (sketched below):
 * it always calls lockTree.clear() when runningTasks() == 0, because the check sessionId !=
taskBatch.getId() always returns true (sessionId never gets updated)
 * this may be the reason why users see the lock_is_leaked WARN
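
Roughly the shape of the code in question (paraphrased and simplified, not an exact copy of the
source):
{code}
// Illustrative sketch of the lockTask() issue: sessionId is compared against
// the batch id but never assigned, so the condition is true on every call and
// lockTree.clear() runs whenever no tasks are running, potentially clearing
// locks that are still held.
class LockTaskSketch {
  interface LockTreeStub {
    void clear();
    Object getSession();
  }

  private long sessionId = -1;        // bug: never updated below
  private Object lockSession;         // stands in for LockTree.Session

  Object lockTask(long batchId, int runningTasks, LockTreeStub lockTree) {
    if (lockSession == null || sessionId != batchId) {   // always true
      if (runningTasks == 0) {
        lockTree.clear();             // may drop locks another task still holds
      }
      lockSession = lockTree.getSession();
      // missing: sessionId = batchId;   <- updating sessionId here would fix the check
    }
    return lockSession;
  }
}
{code}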


> Overseer node getting stuck
> ---------------------------
>
>                 Key: SOLR-13416
>                 URL: https://issues.apache.org/jira/browse/SOLR-13416
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>    Affects Versions: 7.2.1
>            Reporter: Cao Manh Dat
>            Assignee: Cao Manh Dat
>            Priority: Major
>
> A problem was privately reported to me about the Overseer getting stuck, leading to no
operations being processed until a new Overseer node is elected.
> The following exception was logged:
> {code}
> WARN - 2019-03-11 10:11:34.879; org.apache.solr.cloud.LockTree$Node; lock_is_leaked at[item-xref-secondary-stage]
> ERROR - 2019-03-11 10:11:35.002; org.apache.solr.common.SolrException; Collection: item-xref-secondary-stage
operation: delete failed:org.apache.solr.common.SolrException: Could not find collection :
item-xref-secondary-stage
> at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:111)
> at org.apache.solr.cloud.OverseerCollectionMessageHandler.collectionCmd(OverseerCollectionMessageHandler.java:795)
> at org.apache.solr.cloud.DeleteCollectionCmd.call(DeleteCollectionCmd.java:91)
> at org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:233)
> at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:464)
> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> This is a serious problem since it can lead to the whole system hanging.
> Verified:
> 1. GC setting and long GC issues on Solr/ZK - none
> 2. Ulimits (OK): 65535 (-n open files) and nproc
> 3. ZK quorum working (5 ZKs)
> 4. Checked min/avg/max latencies on the ZK ensemble
> 5. Solr startup parameters



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

