nutch-dev mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (NUTCH-656) DeleteDuplicates based on crawlDB only
Date Fri, 15 Nov 2013 01:22:17 GMT

    [ https://issues.apache.org/jira/browse/NUTCH-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823182#comment-13823182 ]

Hudson commented on NUTCH-656:
------------------------------

SUCCESS: Integrated in Nutch-trunk #2421 (See [https://builds.apache.org/job/Nutch-trunk/2421/])
NUTCH-656 Generic Deduplicator (jnioche, snagel) (jnioche: http://svn.apache.org/viewvc/nutch/trunk/?view=rev&rev=1541883)
* /nutch/trunk/CHANGES.txt
* /nutch/trunk/src/bin/crawl
* /nutch/trunk/src/bin/nutch
* /nutch/trunk/src/java/org/apache/nutch/crawl/CrawlDatum.java
* /nutch/trunk/src/java/org/apache/nutch/crawl/DeduplicationJob.java
* /nutch/trunk/src/java/org/apache/nutch/indexer/CleaningJob.java
* /nutch/trunk/src/java/org/apache/nutch/indexer/IndexerMapReduce.java
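
For reference, the change also touches src/bin/nutch and src/bin/crawl to expose the new job as a command. Assuming the usage introduced there, crawlDB-based deduplication should be invokable along the lines of:

  bin/nutch dedup <crawldb>

(The command name and argument are inferred from the DeduplicationJob added in this change; check bin/nutch in r1541883 for the exact usage string.)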


> DeleteDuplicates based on crawlDB only 
> ---------------------------------------
>
>                 Key: NUTCH-656
>                 URL: https://issues.apache.org/jira/browse/NUTCH-656
>             Project: Nutch
>          Issue Type: Wish
>          Components: indexer
>            Reporter: Julien Nioche
>            Assignee: Julien Nioche
>         Attachments: NUTCH-656.patch, NUTCH-656.v2.patch, NUTCH-656.v3.patch
>
>
> The existing dedup functionality relies on Lucene indices and can't be used when indexing is delegated to SOLR.
> I was wondering whether we could instead use the information from the crawlDB to detect URLs to delete, then do the deletions in an indexer-neutral way. As far as I understand, the crawlDB contains all the elements we need for dedup, namely:
> * URL 
> * signature
> * fetch time
> * score
> In map-reduce terms we would have two different jobs:
> * read the crawlDB and compare on URLs: keep only the most recent element; older ones are stored in a file and will be deleted later
> * read the crawlDB and have a map function generating signatures as keys and URL + fetch time + score as values
> * the reduce function would depend on which parameter is set (i.e. use signature or score) and would output a list of URLs to delete
> This assumes that we can then use the URLs to identify documents in the indices.
> Any thoughts on this? Am I missing something?
> Julien
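
As an illustration of the second job described in the issue above (map emits the signature as key with URL, fetch time and score as value; reduce picks one winner per signature), here is a minimal Hadoop MapReduce sketch in Java. The class names, the tab-separated value encoding, and the tie-breaking rule (highest score, then most recent fetch time) are assumptions made for the example; the actual implementation committed here is org.apache.nutch.crawl.DeduplicationJob, which reads CrawlDatum records from the crawlDB directly.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

/** Illustrative sketch only; not the DeduplicationJob committed in r1541883. */
public class SignatureDedupSketch {

  /** Map: re-key each record by its content signature. */
  public static class SigMapper extends Mapper<Text, Text, Text, Text> {
    @Override
    protected void map(Text url, Text datum, Context context)
        throws IOException, InterruptedException {
      // Assumed input value layout for this sketch: "signature \t fetchTime \t score".
      // A real job would read CrawlDatum objects from the crawlDB instead.
      String[] f = datum.toString().split("\t");
      String signature = f[0], fetchTime = f[1], score = f[2];
      context.write(new Text(signature),
          new Text(url.toString() + "\t" + fetchTime + "\t" + score));
    }
  }

  /** Reduce: keep one URL per signature, emit all other URLs as deletion candidates. */
  public static class SigReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text signature, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      // Buffer the candidates because the Iterable can only be traversed once.
      List<String[]> candidates = new ArrayList<String[]>();
      for (Text v : values) {
        candidates.add(v.toString().split("\t"));
      }
      // Pick the record to keep: highest score, then most recent fetch time.
      String keepUrl = null;
      long keepTime = Long.MIN_VALUE;
      float keepScore = Float.NEGATIVE_INFINITY;
      for (String[] c : candidates) {
        long time = Long.parseLong(c[1]);
        float score = Float.parseFloat(c[2]);
        if (score > keepScore || (score == keepScore && time > keepTime)) {
          keepUrl = c[0];
          keepScore = score;
          keepTime = time;
        }
      }
      // Everything else under the same signature is a duplicate to delete.
      for (String[] c : candidates) {
        if (!c[0].equals(keepUrl)) {
          context.write(new Text(c[0]), new Text("duplicate"));
        }
      }
    }
  }
}

The (URL, "duplicate") pairs emitted by the reducer play the role of the "list of URLs to delete" mentioned in the description; the first, URL-based job would have the same shape, with the URL as the grouping key and the most recent record kept.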



--
This message was sent by Atlassian JIRA
(v6.1#6144)
