lucene-dev mailing list archives

From "Mike Schrag (JIRA)" <>
Subject [jira] [Created] (SOLR-5081) Highly parallel document insertion hangs SolrCloud
Date Sat, 27 Jul 2013 20:41:49 GMT
Mike Schrag created SOLR-5081:

             Summary: Highly parallel document insertion hangs SolrCloud
                 Key: SOLR-5081
             Project: Solr
          Issue Type: Bug
          Components: SolrCloud
    Affects Versions: 4.3.1
            Reporter: Mike Schrag

If I do a highly parallel document load from a Hadoop cluster into an 18-node SolrCloud cluster,
I can deadlock Solr every time.

The ulimits on the nodes are:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1031181
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 515590
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The open file count is only around 4000 when this happens.
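For reference, the per-process descriptor count can be checked directly on Linux via /proc (a sketch; substitute the Solr JVM's PID, found e.g. with jps or ps, for the current shell's PID used here):

```shell
# Count open file descriptors for a process on Linux by listing /proc/<pid>/fd.
# $$ (the current shell) stands in for the Solr JVM's PID in this sketch.
pid=$$
fd_count=$(ls "/proc/$pid/fd" | wc -l)
echo "pid $pid has $fd_count open file descriptors"
```

A count well under the 32768 ulimit, as observed here, suggests the hang is not a descriptor leak.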

If I bounce all the servers, things start working again, which makes me think this is a Solr
issue and not ZooKeeper.

I'll attach the stack trace from one of the servers.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
