nutch-dev mailing list archives

From Markus Jelsma <markus.jel...@openindex.io>
Subject RE: Nutch Crawl a Specific List Of URLs (150K)
Date Fri, 03 Jan 2014 11:12:23 GMT
Hi - Are they exact duplicates? If you inject http://nutch.apache.org/ a thousand times, it
is added only once, and crawled only once, until it is scheduled to crawl again.
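For reference, a minimal way to see this behaviour for yourself (the crawl/crawldb and urls paths below are just example names):

# seed file that repeats the same URL many times
mkdir -p urls
for i in $(seq 1 1000); do echo "http://nutch.apache.org/" >> urls/seed.txt; done

# inject into a fresh crawldb, then look at the stats
bin/nutch inject crawl/crawldb urls
bin/nutch readdb crawl/crawldb -stats
# TOTAL urls should come back as 1, not 1000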

-----Original message-----
From: Bin Wang <binwang.cu@gmail.com>
Sent: Thursday 2nd January 2014 23:13
To: dev@nutch.apache.org
Subject: Re: Nutch Crawl a Specific List Of URLs (150K)

Thanks for all the responses - they are very inspiring, and diving down to the log level is a great
way to learn Nutch.

I used Python's BeautifulSoup to parse the sitemap of my target website, which is where those
150K URLs came from. It turns out the list contains a huge number of duplicates - in the end only
about 900 of the URLs are distinct.

And Nutch was smart enough to filter out those duplicates and reduce the list to 900 before ever
hitting the website.
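(In hindsight, a quick way to catch this before injecting, assuming the seed list lives in urls/seed.txt:)

# compare total vs. distinct line counts in the seed list
wc -l urls/seed.txt
sort -u urls/seed.txt | wc -l

# or inject a deduplicated copy instead
sort -u urls/seed.txt > seed-dedup.txt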

On Mon, Dec 30, 2013 at 4:13 AM, Markus Jelsma <markus.jelsma@openindex.io> wrote:

Hi,

You ran one crawl cycle. Depending on the generator and fetcher settings, you are not guaranteed
to fetch 200,000 URLs with only topN specified. Check the logs: the generator will tell you
if there are too many URLs for a host or domain. Also check all fetcher logs; they will tell
you how many URLs were fetched and why fetching likely stopped when it did.
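For example, a rough sketch of where to look - the log path below is the default for a local Nutch 1.x install, so adjust as needed:

# generator messages, including per-host/per-domain limit warnings
grep -i generator logs/hadoop.log

# fetcher progress and the reason fetching stopped
grep -i fetcher logs/hadoop.log

Settings that commonly cap a single cycle are generate.max.count together with generate.count.mode (per-host or per-domain limits) and fetcher.timelimit.mins (a hard time limit on the fetch), configured in conf/nutch-site.xml.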

Cheers

-----Original message-----
From: Bin Wang <binwang.cu@gmail.com>
Sent: Friday 27th December 2013 19:50
To: dev@nutch.apache.org
Subject: Nutch Crawl a Specific List Of URLs (150K)

Hi,

I have a very specific list of URLs - about 140K of them.

I switched off `db.update.additions.allowed` so it will not update the crawldb... and I was
assuming I could feed all the URLs to Nutch and, after one round of fetching, it would finish
and leave all the raw HTML files in the segment folder.
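(For context, here is that one-round workflow spelled out with the individual commands instead of the all-in-one crawl command; directory names are just examples:)

bin/nutch inject crawl/crawldb urls
bin/nutch generate crawl/crawldb crawl/segments -topN 200000
# fetch the segment that generate just created
s1=$(ls -d crawl/segments/* | tail -1)
bin/nutch fetch "$s1"
# raw fetched content ends up under $s1/content
# (assuming fetcher.store.content is left at its default of true)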

However, after I ran this command:

nohup bin/nutch crawl urls -dir result -depth 1 -topN 200000 &

It ended up with only a small number of URLs:

TOTAL urls:     872
retry 0:        872
min score:      1.0
avg score:      1.0
max score:      1.0
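(These look like CrawlDb statistics; with the -dir result layout from the command above, they can be re-checked at any time:)

bin/nutch readdb result/crawldb -stats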

And I double-checked the log to make sure that every URL passes the filters and normalization.
Here is the log:

2013-12-27 17:55:25,068 INFO  crawl.Injector - Injector: total number of urls rejected by filters: 0
2013-12-27 17:55:25,069 INFO  crawl.Injector - Injector: total number of urls injected after normalization and filtering: 139058
2013-12-27 17:55:25,069 INFO  crawl.Injector - Injector: Merging injected urls into crawl db.

I don't know how 140K URLs ended up being only 872 in the end...
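(One way to see which URLs actually survived is to dump the CrawlDb to text - the injector counter above is taken before the merge step, so duplicate seeds are collapsed afterwards; the output path is just an example:)

bin/nutch readdb result/crawldb -dump crawldb-dump
# in local mode the dump typically lands in a part file
less crawldb-dump/part-00000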

/usr/bin

----------------------

AWS ubuntu instance
Nutch 1.7
java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.6) (6b27-1.12.6-1ubuntu0.12.04.4)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)


