nutch-dev mailing list archives

From "Julien Nioche (JIRA)" <j...@apache.org>
Subject [jira] Commented: (NUTCH-963) Add support for deleting Solr documents with STATUS_DB_GONE in CrawlDB (404 urls)
Date Thu, 27 Jan 2011 14:12:45 GMT

    [ https://issues.apache.org/jira/browse/NUTCH-963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12987574#action_12987574 ]

Julien Nioche commented on NUTCH-963:
-------------------------------------

It would be nice to couple this with the deduplication for Solr as well. The current mechanism
is not great: it pulls ALL the documents from Solr, finds the duplicates, then issues a delete
command. It would be more efficient to find the duplicates with a MapReduce job on the crawldb
and then send the deletions to Solr. We could also delete the URLs which are definitely gone
at the same time.
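
For illustration only, a rough sketch of what the map side of such a job could look like
against the old Hadoop mapred API used in Nutch 1.x. The class name and the grouping key are
made up for the example; the reducer (not shown) would keep one URL per signature and mark
the rest, plus the gone URLs, for deletion from Solr.

// Hypothetical sketch, not a patch: map over the crawldb, emit gone URLs
// for deletion and key the rest by content signature so the reducer can
// pick out duplicates.
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.StringUtils;
import org.apache.nutch.crawl.CrawlDatum;

public class DedupDeleteMapper extends MapReduceBase
    implements Mapper<Text, CrawlDatum, Text, Text> {

  // made-up marker key for URLs that should simply be deleted
  private static final Text GONE_KEY = new Text("__gone__");

  public void map(Text url, CrawlDatum datum,
      OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    if (datum.getStatus() == CrawlDatum.STATUS_DB_GONE) {
      // 404s etc.: no grouping needed, just schedule a delete
      output.collect(GONE_KEY, url);
    } else if (datum.getSignature() != null) {
      // group by content signature; duplicates end up in the same reduce call
      output.collect(
          new Text(StringUtils.byteToHexString(datum.getSignature())), url);
    }
  }
}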

> Add support for deleting Solr documents with STATUS_DB_GONE in CrawlDB (404 urls)
> ---------------------------------------------------------------------------------
>
>                 Key: NUTCH-963
>                 URL: https://issues.apache.org/jira/browse/NUTCH-963
>             Project: Nutch
>          Issue Type: New Feature
>          Components: indexer
>    Affects Versions: 2.0
>            Reporter: Claudio Martella
>            Assignee: Markus Jelsma
>            Priority: Minor
>             Fix For: 1.3, 2.0
>
>         Attachments: Solr404Deleter.java
>
>
> When issuing recrawls it can happen that certain URLs have expired (i.e. URLs that don't
> exist anymore and return 404).
> This patch creates a new command in the indexer that scans the crawldb looking for these
> URLs and issues delete commands to Solr.
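
For reference, the delete side boils down to a couple of SolrJ calls along these lines. This
is a sketch only, not the attached Solr404Deleter.java; the class name is made up, the server
URL is a placeholder, and the URLs collected from the crawldb are assumed to be the Solr
document ids.

// Minimal sketch of issuing the deletions with SolrJ (1.4/3.x-era client).
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class SolrDeleteSketch {
  public static void deleteGone(String solrUrl, List<String> goneUrls)
      throws Exception {
    SolrServer solr = new CommonsHttpSolrServer(solrUrl); // placeholder URL
    solr.deleteById(goneUrls); // one request for the whole batch
    solr.commit();             // make the deletions visible
  }
}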

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

