nutch-dev mailing list archives

From Andrzej Bialecki <...@getopt.org>
Subject Re: Google performance bottlenecks ;-) (Re: Lucene performance bottlenecks)
Date Mon, 12 Dec 2005 09:58:41 GMT
Dawid Weiss wrote:

>
> Hi Andrzej,
>
> This was a very interesting experiment -- thanks for sharing the 
> results with us.
>
>> The last range was the maximum in this case - Google wouldn't display 
>> any hit above 652 (which I find curious, too - because the total 
>> number of hits is, well, significantly higher - and Google claims to 
>> return up to the first 1000 results).
>
>
> I believe this may have something to do with the way Google compacts 
> URLs. My guess is that initially about 1000 results are found and 
> ranked; pruning is then applied to that set, leaving just a subset of 
> results for the user to select from.
>

That was my guess, too ...

> Sorry, my initial intuition proved wrong -- there is no clear logic 
> behind the maximum limit of results you can see (unless you can find 
> some logic in the fact that I can see _more_ results when I _exclude_ 
> repeated ones from the total).


Well, trying not to sound too much like Spock... Fascinating :-), but 
the only logical conclusion is that at the user end we never deal with 
hard results calculated directly from the hypothetical "main index"; we 
deal only with rough estimates from the "estimated indexes". These 
change over time, and perhaps even with the group of servers that 
answered this particular query... My guess is that there could be 
different "estimated" indexes prepared for different values of the main 
boolean parameters, like filter=0...
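
For concreteness, here is a minimal sketch (plain Java; all class and 
method names are purely illustrative, not anything Google, Lucene or 
Nutch actually exposes) of how serving answers from pre-pruned 
"estimated" indexes could produce exactly this behaviour: the hit count 
shown to the user is a rough precomputed estimate, while the number of 
results one can actually page through is capped and may differ between 
boolean variants of the same query.

  import java.util.Collections;
  import java.util.List;

  public class EstimatedIndex {

      // Hard cap on hits a user can actually page through (Google's ~1000).
      static final int MAX_RETRIEVABLE = 1000;

      // A pruned result list plus a rough, precomputed total-hit estimate.
      static class EstimatedHits {
          final List<String> retrievable;   // at most MAX_RETRIEVABLE docs
          final long estimatedTotal;        // rough count, not derived from the list
          EstimatedHits(List<String> retrievable, long estimatedTotal) {
              this.retrievable = retrievable;
              this.estimatedTotal = estimatedTotal;
          }
      }

      // Answer a query from a pre-pruned "estimated" index. Different boolean
      // parameters (e.g. filter=0 vs. filter=1) would hit differently pruned
      // indexes, so the number of retrievable hits and the displayed estimate
      // need not agree with each other, or between the two variants.
      EstimatedHits search(String query, boolean filterDuplicates) {
          List<String> topDocs = prunedTopDocs(query, filterDuplicates); // <= 1000 docs
          long estimate = roughCountFromStatistics(query);               // e.g. "about 23,400,000"
          return new EstimatedHits(topDocs, estimate);
      }

      // Placeholders for whatever ranking / counting machinery sits behind this.
      List<String> prunedTopDocs(String query, boolean filterDuplicates) {
          return Collections.emptyList();
      }
      long roughCountFromStatistics(String query) {
          return 0L;
      }
  }

Under that (hypothetical) scheme, nothing forces the capped result list 
and the estimate to be consistent, which would explain both the cut-off 
at a few hundred hits and the larger counts when duplicates are excluded.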

-- 
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com


