nutch-dev mailing list archives

From "Markus Jelsma (JIRA)" <>
Subject [jira] [Commented] (NUTCH-1052) Multiple deletes of the same URL using SolrClean
Date Mon, 12 Sep 2011 20:33:09 GMT


Markus Jelsma commented on NUTCH-1052:

Although a delete doesn't take much space in the buffer, there is the potential for thousands
of deletes to stack up; deletes should indeed increment the counter.
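
To illustrate the point about counting deletes, here is a minimal stand-in: count delete actions alongside indexed documents so a run reports how much work it actually queued. The plain Map plays the role of Hadoop's job counters; all names are illustrative, not Nutch or Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: tally "index" vs "delete" actions the way a
// job counter would, so the log reflects deletes as well as adds.
public class DeleteCounterSketch {
    public static void main(String[] args) {
        Map<String, Long> counters = new HashMap<>();
        String[] actions = {"index", "delete", "delete", "index", "delete"};
        for (String a : actions) {
            counters.merge(a, 1L, Long::sum); // increment per emitted action
        }
        System.out.println(counters.get("delete")); // 3
        System.out.println(counters.get("index"));  // 2
    }
}
```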

Redirects (permanent and temporary moves) are another problem. During indexing we don't know
if a URL has become a redirect. The only solution would be to treat them the same as db_gone.
This can lead to a significant number of useless deletes, but the same is true for db_gone
anyway. Solr, at least, doesn't waste too many cycles on useless delete actions.

I do need another committer's comments on the abuse of the RecordWriter. It works all right
but doesn't feel right. A possible solution would be a small struct that holds the document
and an index/delete flag, since it is not possible to pass more parameters than key/value.
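
For what it's worth, a minimal sketch of that struct idea could look like this. "IndexAction" and its members are illustrative names, not the actual Nutch API:

```java
// Hypothetical value class pairing a document with an index/delete flag,
// so the indexer can emit one record type instead of overloading the
// RecordWriter with two behaviors.
public class IndexAction {
    public enum Op { INDEX, DELETE }

    public final Object doc; // the NutchDocument for an index; null for a delete
    public final Op op;

    public IndexAction(Object doc, Op op) {
        this.doc = doc;
        this.op = op;
    }

    // A delete carries no document, only the flag.
    public static IndexAction delete() {
        return new IndexAction(null, Op.DELETE);
    }

    public static void main(String[] args) {
        IndexAction del = IndexAction.delete();
        System.out.println(del.op); // DELETE
    }
}
```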

> Multiple deletes of the same URL using SolrClean
> ------------------------------------------------
>                 Key: NUTCH-1052
>                 URL:
>             Project: Nutch
>          Issue Type: Improvement
>          Components: indexer
>    Affects Versions: 1.3, 1.4
>            Reporter: Tim Pease
>            Assignee: Markus Jelsma
>            Priority: Minor
>             Fix For: 1.4, 2.0
>         Attachments: NUTCH-1052-1.4-1.patch, NUTCH-1052-1.4-2.patch
> The SolrClean class does not keep track of purged URLs; it only checks the URL status
> for "db_gone". When run multiple times, the same list of URLs will be deleted from Solr.
> For small, stable crawl databases this is not a problem, but for larger crawls it could
> be: SolrClean will become an expensive operation.
> One solution is to add a "purged" flag in the CrawlDatum metadata. SolrClean would then
> check this flag in addition to the "db_gone" status before adding the URL to the delete list.
> Another solution is to add a new state to the status field: "db_gone_and_purged".
> Either way, the crawl DB will need to be updated after the Solr delete has successfully
> completed.
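
For illustration, the purged-flag check from the quoted description might look roughly like this. The status constant and the plain Map are stand-ins for CrawlDatum's status byte and its metadata; the key name is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the "purged" metadata flag idea: only queue a delete
// for a db_gone URL that has not already been purged from Solr. The
// constant value and "_purged_" key are illustrative, not Nutch's.
public class PurgedFlagSketch {
    static final byte STATUS_DB_GONE = 3;        // assumed value for illustration
    static final String PURGED_KEY = "_purged_"; // hypothetical metadata key

    static boolean shouldDelete(byte status, Map<String, String> metadata) {
        return status == STATUS_DB_GONE && !metadata.containsKey(PURGED_KEY);
    }

    public static void main(String[] args) {
        Map<String, String> meta = new HashMap<>();
        System.out.println(shouldDelete(STATUS_DB_GONE, meta)); // true: first run deletes
        meta.put(PURGED_KEY, "true"); // set after a successful Solr delete + crawl DB update
        System.out.println(shouldDelete(STATUS_DB_GONE, meta)); // false: later runs skip it
    }
}
```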

This message is automatically generated by JIRA.
For more information on JIRA, see:

