lucene-dev mailing list archives

From "Bill Bell (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (SOLR-2218) Performance of start= and rows= parameters are exponentially slow with large data sets
Date Mon, 03 Jan 2011 04:39:46 GMT

    [ https://issues.apache.org/jira/browse/SOLR-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12976606#action_12976606 ]

Bill Bell edited comment on SOLR-2218 at 1/2/11 11:38 PM:
----------------------------------------------------------

Hoss,

So what you are saying is that instead of:

1. http://hostname/solr/select?fl=id&start=20000&rows=1000&q=*:*&sort=id asc

I should use:

LAST_ID=20000
1. http://hostname/solr/select?fl=id&rows=1000&q=*:*&sort=id asc&fq=id:[<LAST_ID> TO *]

This should definitely be faster. Unfortunately, I need the results ordered by highest score. Does fq support score?

SCORE=5.6
1. http://hostname/solr/select?fl=id,score&rows=1000&q=*:*&sort=score desc&fq=score:[0 TO <SCORE>]

Thoughts?
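The filter-query approach above amounts to keyset (cursor) pagination: instead of start=, each request filters on the last id seen on the previous page. A minimal sketch in Python, assuming documents are sorted ascending by id; `fetch_page` is a hypothetical stand-in for an HTTP call to /solr/select, and the inclusive [LAST_ID TO *] bound (as in the URLs above) means the first hit of each page repeats the cursor and is dropped:

```python
def page_params(last_id, rows=1000):
    """Build Solr query parameters for one page of the id-cursor loop."""
    params = {
        "q": "*:*",
        "fl": "id",
        "rows": rows,
        "sort": "id asc",
    }
    if last_id is not None:
        # Inclusive lower bound, as in the URLs above; the duplicated
        # cursor document is skipped in fetch_all below.
        params["fq"] = "id:[%s TO *]" % last_id
    return params

def fetch_all(fetch_page, rows=1000):
    """Drive the cursor loop until a page comes back empty.

    `fetch_page(params)` is a placeholder for the actual HTTP request to
    /solr/select; it must return the matching ids in ascending order.
    """
    last_id = None
    results = []
    while True:
        ids = fetch_page(page_params(last_id, rows))
        if last_id is not None and ids and ids[0] == last_id:
            ids = ids[1:]  # drop the repeated cursor document
        if not ids:
            break
        results.extend(ids)
        last_id = ids[-1]
    return results
```

Note this assumes id values sort the same lexically as in the index; each page after the first yields rows-1 new documents because the inclusive bound re-fetches the cursor document.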






> Performance of start= and rows= parameters are exponentially slow with large data sets
> --------------------------------------------------------------------------------------
>
>                 Key: SOLR-2218
>                 URL: https://issues.apache.org/jira/browse/SOLR-2218
>             Project: Solr
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.4.1
>            Reporter: Bill Bell
>
> With large data sets (> 10M rows), setting start=<large number> and rows=<large number> is slow, and gets slower
the farther you get from start=0 with a complex query. Random sorting also makes this slower.
> I would like to make looping through large data sets faster. It would be nice if we could pass a pointer to the result set to loop over, or support very large
rows=<number>.
> Something like:
> rows=1000
> start=0
> spointer=string_my_query_1
> Then within interval (like 5 mins) I can reference this loop:
> Something like:
> rows=1000
> start=1000
> spointer=string_my_query_1
> What do you think? Since the data set is too large, the cache does not help.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

