nutch-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (NUTCH-2456) Allow to index pages/URLs not contained in CrawlDb
Date Wed, 08 Nov 2017 14:47:01 GMT

    [ https://issues.apache.org/jira/browse/NUTCH-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16244070#comment-16244070 ]

ASF GitHub Bot commented on NUTCH-2456:
---------------------------------------

sebastian-nagel commented on a change in pull request #240: NUTCH-2456 - Redirected documents are not indexed
URL: https://github.com/apache/nutch/pull/240#discussion_r149686109
 
 

 ##########
 File path: src/java/org/apache/nutch/indexer/IndexerMapReduce.java
 ##########
 @@ -309,23 +308,25 @@ public void reduce(Text key, Iterator<NutchWritable> values,
     doc.add("boost", Float.toString(boost));
 
     try {
-      // Indexing filters may also be interested in the signature
-      fetchDatum.setSignature(dbDatum.getSignature());
-      
-      // extract information from dbDatum and pass it to
-      // fetchDatum so that indexing filters can use it
-      final Text url = (Text) dbDatum.getMetaData().get(
-          Nutch.WRITABLE_REPR_URL_KEY);
-      if (url != null) {
-        // Representation URL also needs normalization and filtering.
-        // If repr URL is excluded by filters we still accept this document
-        // but represented by its primary URL ("key") which has passed URL
-        // filters.
-        String urlString = filterUrl(normalizeUrl(url.toString()));
-        if (urlString != null) {
-          url.set(urlString);
-          fetchDatum.getMetaData().put(Nutch.WRITABLE_REPR_URL_KEY, url);
-        }
+      if (dbDatum!=null) {
+	      // Indexing filters may also be interested in the signature
 
 Review comment:
   Code style: indentation 2 spaces per level, no tabs
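
  For illustration, the added guard with the indentation the style note asks for (2 spaces
  per level, no tabs) would read roughly as sketched below. The body is just the
  dbDatum-dependent code visible in the removed lines of the hunk, so this is a reading
  aid rather than the committed change:

      if (dbDatum != null) {
        // Indexing filters may also be interested in the signature
        fetchDatum.setSignature(dbDatum.getSignature());

        // extract information from dbDatum and pass it to
        // fetchDatum so that indexing filters can use it
        final Text url = (Text) dbDatum.getMetaData().get(
            Nutch.WRITABLE_REPR_URL_KEY);
        if (url != null) {
          // Representation URL also needs normalization and filtering.
          // If repr URL is excluded by filters we still accept this document
          // but represented by its primary URL ("key") which has passed URL
          // filters.
          String urlString = filterUrl(normalizeUrl(url.toString()));
          if (urlString != null) {
            url.set(urlString);
            fetchDatum.getMetaData().put(Nutch.WRITABLE_REPR_URL_KEY, url);
          }
        }
      }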

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> Allow to index pages/URLs not contained in CrawlDb
> --------------------------------------------------
>
>                 Key: NUTCH-2456
>                 URL: https://issues.apache.org/jira/browse/NUTCH-2456
>             Project: Nutch
>          Issue Type: Bug
>          Components: indexer
>    Affects Versions: 1.13
>            Reporter: Yossi Tamari
>            Priority: Critical
>
> If http.redirect.max is set to a positive value, the Fetcher will follow redirects,
> creating a new CrawlDatum.
> If the redirected URL is fetched and parsed, indexing it hits a special case: dbDatum
> is null. This means that at
> [https://github.com/apache/nutch/blob/6199492f5e1e8811022257c88dbf63f1e1c739d0/src/java/org/apache/nutch/indexer/IndexerMapReduce.java#L259]
> the document is not indexed, because it is assumed to have only inlinks (actually it
> has everything but dbDatum).
> I'm not sure what the correct fix is here. It seems to me the condition should use AND
> instead of OR anyway, but I may not understand the original intent. It is clear that it
> is too strict as is.
> However, the code following that line assumes all 4 objects are non-null, so a patch
> would need to change more than just the condition.
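
For context, a minimal sketch (assumptions: the early-return check at the linked line
tests fetchDatum, dbDatum, parseText and parseData with OR, and the variable names match
the hunk quoted above) of how the condition could be relaxed so that a redirect target
without a CrawlDb entry is still indexed, while later uses of dbDatum are null-guarded
in the spirit of PR #240:

    // Early return: a document still needs fetch and parse data to be indexable,
    // but a missing dbDatum (e.g. a URL reached only through a redirect followed
    // because http.redirect.max > 0) no longer drops the document.
    if (fetchDatum == null || parseText == null || parseData == null) {
      return;
    }

    // Every later dereference of dbDatum (signature, repr URL, fetch time, ...)
    // then has to be wrapped in a null check, for example:
    if (dbDatum != null) {
      // Indexing filters may also be interested in the signature
      fetchDatum.setSignature(dbDatum.getSignature());
    }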



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
