lucene-dev mailing list archives

From "Uwe Schindler (JIRA)" <>
Subject [jira] Commented: (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API
Date Fri, 23 Oct 2009 07:53:59 GMT


Uwe Schindler commented on LUCENE-1997:

So it is not related to Java 1.5/1.6 but rather to 32/64 bit. As most servers
run 64 bit, I think the new 2.9 search API is fine?

I agree with you, the new API is cleaner overall; the old API could only be reimplemented with
major refactoring, as it does not fit well into multi-segment search.

By the way, during the refactoring for Java 5 I found some inconsistencies in MultiSearcher/ParallelMultiSearcher,
which use FieldDocSortedHitQueue (it is used nowhere else anymore): when merging the queues of
all Searchers during sorting, it uses some native compareTo operations, which may not work
correctly with custom comparators. Is this correct? In my opinion this queue should also somehow
use at least the FieldComparator. Mark, I do not understand it completely, but how does this
fit together? I added a warning because of very strange (unsafe) casts in the source code
and a SuppressWarnings("unchecked"), so it is easy to find in FieldDocSortedHitQueue.
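To illustrate the concern (a hypothetical sketch, not the actual FieldDocSortedHitQueue code): if the merge queue orders hits by calling the values' natural compareTo, the ordering defined by a custom comparator is silently ignored. Here a case-insensitive comparator stands in for a custom FieldComparator:

```java
import java.util.PriorityQueue;

public class MergeOrderSketch {
    // Top hit when the merge queue uses natural compareTo ordering.
    static String topByNaturalOrder(String... values) {
        PriorityQueue<String> pq = new PriorityQueue<>();
        for (String v : values) pq.add(v);
        return pq.poll();
    }

    // Top hit when the merge queue honors a custom comparator
    // (case-insensitive here, standing in for a custom FieldComparator).
    static String topByCustomOrder(String... values) {
        PriorityQueue<String> pq = new PriorityQueue<>(String.CASE_INSENSITIVE_ORDER);
        for (String v : values) pq.add(v);
        return pq.poll();
    }

    public static void main(String[] args) {
        // Natural String order sorts uppercase before lowercase: "Banana" wins.
        System.out.println(topByNaturalOrder("apple", "Banana", "cherry")); // Banana
        // The custom order the user asked for puts "apple" first.
        System.out.println(topByCustomOrder("apple", "Banana", "cherry")); // apple
    }
}
```

The two queues disagree on the top hit, which is exactly the kind of mismatch a merge step using native compareTo would introduce against per-segment results collected with a custom comparator.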

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>                 Key: LUCENE-1997
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests.  Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available).  Then
> it runs various combinations:
>   * Index with 20 balanced segments vs index with the "normal" log
>     segment size
>   * Queries with different numbers of hits (only for wikipedia index)
>   * Different top N
>   * Different sorts (by title, for wikipedia, and by random string,
>     random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept.  The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each and produces a table (in Jira format) as output.
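The multi-PQ design described in the issue (one priority queue per segment, merged at the end) can be sketched roughly as follows. This is an illustrative sketch only, assuming a simple top-N-by-score sort; the names and structure are not taken from John's patch:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MultiPQSketch {
    // Collect the top-N scores of one segment into a min-heap, so the
    // weakest retained hit sits at the head and is cheap to evict.
    static PriorityQueue<Float> collectSegment(float[] scores, int topN) {
        PriorityQueue<Float> pq = new PriorityQueue<>(topN);
        for (float s : scores) {
            if (pq.size() < topN) {
                pq.add(s);
            } else if (s > pq.peek()) {
                pq.poll();
                pq.add(s);
            }
        }
        return pq;
    }

    // Merge the per-segment queues into a single global top-N,
    // best scores first -- the final step of the multi-PQ approach.
    static List<Float> mergeTopN(List<PriorityQueue<Float>> queues, int topN) {
        PriorityQueue<Float> merged = new PriorityQueue<>(Comparator.reverseOrder());
        for (PriorityQueue<Float> q : queues) merged.addAll(q);
        List<Float> result = new ArrayList<>();
        for (int i = 0; i < topN && !merged.isEmpty(); i++) result.add(merged.poll());
        return result;
    }

    public static void main(String[] args) {
        List<PriorityQueue<Float>> perSegment = new ArrayList<>();
        perSegment.add(collectSegment(new float[] {0.1f, 0.9f, 0.4f}, 2));
        perSegment.add(collectSegment(new float[] {0.7f, 0.2f, 0.8f}, 2));
        System.out.println(mergeTopN(perSegment, 2)); // [0.9, 0.8]
    }
}
```

The single-PQ API instead pushes every hit from every segment through one shared queue, which is why the per-segment comparator API exists; the benchmark in this issue measures which of the two wins in practice.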

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

