nutch-dev mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Nutch Wiki] Update of "NutchTutorial" by Frungi
Date Sun, 04 Dec 2011 06:46:17 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Nutch Wiki" for change notification.

The "NutchTutorial" page has been changed by Frungi:
http://wiki.apache.org/nutch/NutchTutorial?action=diff&rev1=53&rev2=54

Comment:
capitalizing things that need to be capitalized, `monospacing` commands and filenames and
such that otherwise aren’t set off from the prose

  ## page was renamed from RunningNutchAndSolr
  ## Lang: En
  == Introduction ==
- Apache Nutch is an open source web crawler written in Java. By using it, we can find web
page hyperlinks in an automated manner, reduce lots of maintenance work, for example checking
broken links, and create a copy of all the visited pages for searching over. That’s where
Apache Solr comes in. Solr is an open source full text search framework, with Solr we can
search the visited pages from Nutch. Luckily, integration between Nutch and Solr is pretty
straightforward as explained below.
+ Apache Nutch is an open source Web crawler written in Java. With it, we can find Web page hyperlinks in an automated manner, reduce a lot of maintenance work (for example, checking for broken links), and create a copy of all the visited pages for searching over. That’s where Apache Solr comes in. Solr is an open source full-text search framework; with Solr we can search the pages that Nutch has visited. Luckily, integration between Nutch and Solr is pretty straightforward, as explained below.
  
  Apache Nutch release 1.3 has Solr integration embedded, greatly simplifying the Nutch-Solr setup. It also removes the legacy dependence upon Apache Tomcat (for running the old Nutch Web Application) and Apache Lucene (for indexing). Just download a 1.3 binary release from [[http://www.apache.org/dyn/closer.cgi/nutch/|here]].
  
@@ -14, +14 @@

  
  == Steps ==
  == 1. Setup Nutch from binary distribution ==
-  * Unzip your binary Nutch package to $HOME/nutch-1.3
+  * Unzip your binary Nutch package to `$HOME/nutch-1.3`
-  * cd $HOME/nutch-1.3/runtime/local
+  * `cd $HOME/nutch-1.3/runtime/local`
  
- From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory.
+ From now on, we are going to use `${NUTCH_RUNTIME_HOME}` to refer to the current directory.
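  
  Put together, the steps in this section might look like the following from a shell. This is only a sketch: the archive and directory names are assumptions, so adjust them to match the release you actually downloaded.
  
  {{{
  # assumption: the 1.3 binary release was downloaded to $HOME as a tar.gz archive
  cd $HOME
  tar -xzf apache-nutch-1.3-bin.tar.gz          # adjust the file name to your download
  mv apache-nutch-1.3 $HOME/nutch-1.3           # rename the unpacked directory to the path used in this tutorial
  cd $HOME/nutch-1.3/runtime/local
  export NUTCH_RUNTIME_HOME=$(pwd)              # optional: lets you use ${NUTCH_RUNTIME_HOME} literally in later commands
  }}}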
  
  == 2. Verify your Nutch installation ==
-  * run "bin/nutch" - You can confirm a correct installation if you seeing the following:
+  * run "`bin/nutch`" - You can confirm a correct installation if you see the following:
  
  {{{
  Usage: nutch [-core] COMMAND
@@ -32, +32 @@

  {{{
  chmod +x bin/nutch
  }}}
-  * Setup JAVA_HOME if you are seeing JAVA_HOME not set. On Mac, you can run the following
command or add it to ~/.bashrc:
+  * Set up `JAVA_HOME` if you are seeing a "`JAVA_HOME` not set" message. On a Mac, you can run the following command or add it to `~/.bashrc`:
  
  {{{
  export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
  }}}
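  On Linux the path is different; a typical value (an assumption shown for an OpenJDK 6 package install; point it at wherever your JDK actually lives) would be:
  
  {{{
  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
  }}}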
  == 3. Crawl your first website ==
-  * Add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml,
for example:
+  * Add your agent name in the `value` field of the `http.agent.name` property in `conf/nutch-site.xml`,
for example:
  
  {{{
  <property>
@@ -46, +46 @@

   <value>My Nutch Spider</value>
  </property>
  }}}
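  If your `conf/nutch-site.xml` is still the near-empty skeleton shipped with the release, one way to write a minimal file containing just this property from the shell is shown below. This is a sketch only, and it overwrites the whole file, so do not use it if you have already customized `nutch-site.xml`.
  
  {{{
  # overwrite conf/nutch-site.xml with a minimal configuration holding only http.agent.name
  printf '%s\n' \
    '<?xml version="1.0"?>' \
    '<configuration>' \
    ' <property>' \
    '  <name>http.agent.name</name>' \
    '  <value>My Nutch Spider</value>' \
    ' </property>' \
    '</configuration>' > conf/nutch-site.xml
  }}}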
-  * mkdir -p urls
+  * `mkdir -p urls`
-  * create a text file nutch under /urls with the following content (1 url per line for each
site you want Nutch to crawl).
+  * create a text file `nutch` under the `urls/` directory with the following content (one URL per line for each site you want Nutch to crawl).
  
  {{{
  http://nutch.apache.org/
  }}}
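  The two steps above can also be done in one go from the shell (a sketch, run from `${NUTCH_RUNTIME_HOME}`):
  
  {{{
  # create the seed directory and a seed file named "nutch" containing a single URL
  mkdir -p urls
  echo "http://nutch.apache.org/" > urls/nutch
  }}}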
- * Edit the file conf/regex-urlfilter.txt and replace
+ * Edit the file `conf/regex-urlfilter.txt` and replace
  
  {{{
  # accept anything else
  +.
  }}}
- with a regular expression matching the domain you wish to crawl. For example, if you wished
to limit the crawl to the nutch.apache.org domain, the line should read:
+ with a regular expression matching the domain you wish to crawl. For example, if you wished
to limit the crawl to the `nutch.apache.org` domain, the line should read:
  
  {{{
   +^http://([a-z0-9]*\.)*nutch.apache.org/
  }}}
- This will include any url in the domain nutch.apache.org.
+ This will include any URL in the domain `nutch.apache.org`.
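  
  If you prefer to make that edit from the command line, something like the following works (a sketch assuming GNU `sed`; on BSD/Mac `sed` the `-i` option needs an explicit suffix argument such as `-i ''`):
  
  {{{
  # back up the stock filter, then replace the catch-all "+." rule with a domain-limited one
  cp conf/regex-urlfilter.txt conf/regex-urlfilter.txt.bak
  sed -i 's|^+\.$|+^http://([a-z0-9]*\\.)*nutch.apache.org/|' conf/regex-urlfilter.txt
  }}}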
  
  === 3.1 Using the Crawl Command ===
  Now we are ready to initiate a crawl. Use the following parameters:
@@ -81, +81 @@

  
  {{{
  crawl/crawldb
- Crawl/linkdb
+ crawl/linkdb
  crawl/segments
  }}}
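  Each crawl run adds a timestamped subdirectory under `crawl/segments`; listing it is a quick way to confirm that the crawl produced data (the directory names on your machine will differ):
  
  {{{
  ls crawl/segments/
  # e.g. 20111204063012
  }}}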
- '''NOTE''': If you have a Solr core already set up and wish to index to it, you are required
to add the -solr <solrUrl> parameter to your crawl command e.g.
+ '''NOTE''': If you have a Solr core already set up and wish to index to it, you are required to add the `-solr <solrUrl>` parameter to your `crawl` command, e.g.
  
  {{{
  bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5
  }}}
  If not, then please skip to [[#A4._Setup_Solr_for_search|here]] for how to set up your Solr instance and index your crawl data.
  
- Typically one starts testing one's configuration by crawling at shallow depths, sharply
limiting the number of pages fetched at each level (-topN), and watching the output to check
that desired pages are fetched and undesirable pages are not. Once one is confident of the
configuration, then an appropriate depth for a full crawl is around 10. The number of pages
per level (-topN) for a full crawl can be from tens of thousands to millions, depending on
your resources.
+ Typically one starts testing one's configuration by crawling at shallow depths, sharply
limiting the number of pages fetched at each level (`-topN`), and watching the output to check
that desired pages are fetched and undesirable pages are not. Once one is confident of the
configuration, then an appropriate depth for a full crawl is around 10. The number of pages
per level (`-topN`) for a full crawl can be from tens of thousands to millions, depending
on your resources.
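  
  For example, a first test run followed by a fuller run might look like this (a sketch; the `-dir` option names the output directory, and the numbers are only illustrative):
  
  {{{
  # shallow test crawl: depth 2, at most 5 pages per level
  bin/nutch crawl urls -dir crawl -depth 2 -topN 5
  # fuller crawl once the configuration looks right
  bin/nutch crawl urls -dir crawl -depth 10 -topN 50000
  }}}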
  
- === 3.2 Using Individual Commands for Whole-web Crawling ===
+ === 3.2 Using Individual Commands for Whole-Web Crawling ===
- '''NOTE''': If you previously modified the file conf/regex-urlfilter.txt as as covered [[#A3._Crawl_your_first_website|here]]
you will need to change it back.
+ '''NOTE''': If you previously modified the file `conf/regex-urlfilter.txt` as covered [[#A3._Crawl_your_first_website|here]]
you will need to change it back.
  
- Whole-web crawling is designed to handle very large crawls which may take weeks to complete,
running on multiple machines.  This also permits more control over the crawl process, and
incremental crawling.  It is important to note that whole web crawling does not necessarily
mean crawling the entire world wide web.  We can limit a whole web crawl to just a list of
the URLs we want to crawl.  This is done by using a filter just like we the one we used when
we did the crawl command (above).
+ Whole-Web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines.  This also permits more control over the crawl process, and incremental crawling.  It is important to note that whole-Web crawling does not necessarily mean crawling the entire World Wide Web.  We can limit a whole-Web crawl to just a list of the URLs we want to crawl.  This is done by using a filter just like the one we used with the `crawl` command (above).
  
  ==== Step-by-Step: Concepts ====
  Nutch data is composed of:
  
-  1. The crawl database, or crawldb. This contains information about every url known to Nutch,
including whether it was fetched, and, if so, when.
+  1. The crawl database, or crawldb. This contains information about every URL known to Nutch,
including whether it was fetched, and, if so, when.
-  1. The link database, or linkdb. This contains the list of known links to each url, including
both the source url and anchor text of the link.
+  1. The link database, or linkdb. This contains the list of known links to each URL, including
both the source URL and anchor text of the link.
-  1. A set of segments. Each segment is a set of urls that are fetched as a unit. Segments
are directories with the following subdirectories:
+  1. A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments
are directories with the following subdirectories:
-   * a ''crawl_generate'' names a set of urls to be fetched
+   * a ''crawl_generate'' names a set of URLs to be fetched
-   * a ''crawl_fetch'' contains the status of fetching each url
+   * a ''crawl_fetch'' contains the status of fetching each URL
-   * a ''content'' contains the raw content retrieved from each url
+   * a ''content'' contains the raw content retrieved from each URL
-   * a ''parse_text'' contains the parsed text of each url
+   * a ''parse_text'' contains the parsed text of each URL
-   * a ''parse_data'' contains outlinks and metadata parsed from each url
+   * a ''parse_data'' contains outlinks and metadata parsed from each URL
-   * a ''crawl_parse'' contains the outlink urls, used to update the crawldb
+   * a ''crawl_parse'' contains the outlink URLs, used to update the crawldb
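  
  Once a crawl has run, these structures can be inspected from the command line; for example, summary statistics for the crawl database are available with the `readdb` command (shown here against the `crawl` output directory from section 3.1):
  
  {{{
  bin/nutch readdb crawl/crawldb -stats
  }}}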
  
- ==== Step-by-Step: Seeding the Crawl DB with a list of URLS ====
+ ==== Step-by-Step: Seeding the crawldb with a list of URLs ====
  ===== Option 1: Bootstrapping from the DMOZ database =====
- The injector adds urls to the crawldb. Let's inject URLs from the DMOZ Open Directory. First
we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+Mb
file, so this will take a few minutes.)
+ The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First
we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB
file, so this will take a few minutes.)
  
  {{{
  wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
  gunzip content.rdf.u8.gz
  }}}
- Next we select a random subset of these pages. (We use a random subset so that everyone
who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million
URLs. We select one out of every 5000, so that we end up with around 1000 URLs:
+ Next we select a random subset of these pages. (We use a random subset so that everyone
who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million
URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:
  
  {{{
  mkdir dmoz
  bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
  }}}
- The parser also takes a few minutes, as it must parse the full file. Finally, we initialize
the crawl db with the selected urls.
+ The parser also takes a few minutes, as it must parse the full file. Finally, we initialize
the crawldb with the selected URLs.
  
  {{{
  bin/nutch inject crawldb dmoz
  }}}
- Now we have a web database with around 1000 as-yet unfetched URLs in it.
+ Now we have a Web database with around 1,000 as-yet unfetched URLs in it.
  
  ===== Option 2: Bootstrapping from an initial seed list =====
  This option mirrors the creation of the seed list as covered [[#A3._Crawl_your_first_website|here]].
@@ -169, +169 @@

  }}}
  Now the database contains both updated entries for all initial pages and new entries that correspond to newly discovered pages linked from the initial set.
  
- Now we generate and fetch a new segment containing the top-scoring 1000 pages:
+ Now we generate and fetch a new segment containing the top-scoring 1,000 pages:
  
  {{{
  bin/nutch generate crawldb crawldb/segments -topN 1000
@@ -203, +203 @@

  
  == 4. Setup Solr for search ==
   * download the Solr binary release from [[http://www.apache.org/dyn/closer.cgi/lucene/solr/|here]]
-  * unzip to $HOME/apache-solr-3.X, we will now refer to this as ${APACHE_SOLR_HOME}
+  * unzip to `$HOME/apache-solr-3.X`; we will now refer to this as `${APACHE_SOLR_HOME}`
-  * cd ${APACHE_SOLR_HOME}/example
+  * `cd ${APACHE_SOLR_HOME}/example`
-  * java -jar start.jar
+  * `java -jar start.jar`
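  
  Put together, and optionally defining the shorthand as an environment variable (a sketch; substitute the actual version number for `3.X`):
  
  {{{
  export APACHE_SOLR_HOME=$HOME/apache-solr-3.X   # optional shorthand, mirroring ${NUTCH_RUNTIME_HOME}
  cd ${APACHE_SOLR_HOME}/example
  java -jar start.jar                             # starts the bundled Jetty server on port 8983
  }}}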
  
  == 5. Verify Solr installation ==
  After you have started Solr, you should be able to access the following admin console links:
@@ -215, +215 @@

  http://localhost:8983/solr/admin/stats.jsp
  }}}
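  If you prefer a command-line check, the stock Solr example configuration also registers a ping handler (an assumption; adjust the URL if you changed ports or paths):
  
  {{{
  curl "http://localhost:8983/solr/admin/ping"
  }}}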
  == 6. Integrate Solr with Nutch ==
- We have both Nutch and Solr installed and setup correctly. And Nutch already created crawl
data from the seed url(s). Below are the steps to delegate searching to Solr for links to
be searchable:
+ We now have both Nutch and Solr installed and set up correctly, and Nutch has already created crawl data from the seed URL(s). Below are the steps to delegate searching to Solr so that the crawled links become searchable:
  
-  * cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/conf/
+  * `cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/conf/`
-  * restart Solr with the command “java -jar start.jar” under ${APACHE_SOLR_HOME}/example
+  * restart Solr with the command “`java -jar start.jar`” under `${APACHE_SOLR_HOME}/example`
   * run the Solr Index command:
  
  {{{
@@ -226, +226 @@

  }}}
  This will send all crawl data to Solr for indexing. For more information, please see [[bin/nutch solrindex]].
  
- If all has gone to plan, we are now ready to search with http://localhost:8983/solr/admin/.
 If you want to see the raw HTML indexed by Solr, change the content field definition in schema.xml
to:
+ If all has gone to plan, we are now ready to search with http://localhost:8983/solr/admin/.
 If you want to see the raw HTML indexed by Solr, change the content field definition in `schema.xml`
to:
  
  {{{
  <field name="content" type="text" stored="true" indexed="true"/>
