nutch-dev mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Nutch Wiki] Trivial Update of "NutchTutorial" by LewisJohnMcgibbney
Date Fri, 02 Sep 2011 19:47:49 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Nutch Wiki" for change notification.

The "NutchTutorial" page has been changed by LewisJohnMcgibbney:
http://wiki.apache.org/nutch/NutchTutorial?action=diff&rev1=41&rev2=42

- '''''This tutorial deals with Nutch 1.3. For older versions, [[NutchTutorialPre1.3|visit
the pre-1.3 tutorial]].'''''
+ ## page was renamed from Nutch1.3WithSolrIntegration
+ ## page was renamed from Running Nutch 1.3 with Solr Integration
+ ## page was renamed from RunningNutchAndSolr
+ ## Lang: En
  
+ == Introduction ==
- == Requirements ==
-  1. Java 1.4.x, either from Sun or IBM on Linux is preferred. Set NUTCH_JAVA_HOME to the
root of your JVM installation. Nutch 0.9 requires Sun JDK 1.5 or higher.
-  1. Apache's Tomcat 5.x. or higher.
-  1. On Win32, cygwin, for shell support. (If you plan to use Subversion on Win32, be sure
to select the subversion package when you install, in the "Devel" category.)
-  1. Up to a gigabyte of free disk space, a high-speed connection, and an hour or so.
  
+ Apache Nutch is an open source web crawler written in Java. It lets us find web page
hyperlinks in an automated manner, reduces a lot of maintenance work (for example, checking
for broken links), and creates a copy of all the visited pages for searching over. That is where
Apache Solr comes in. Solr is an open source full-text search framework; with Solr we can
search the pages visited by Nutch. Luckily, integration between Nutch and Solr is fairly
straightforward, as explained below.
- == Getting Started ==
- First, you need to get a copy of the Nutch code. You can download a release from http://www.apache.org/dyn/closer.cgi/nutch/.
Unpack the release and connect to its top-level directory. Or, check out the latest source
code from subversion and build it with Ant.
  
- Try the following command:
+ Apache Nutch release 1.3 has Solr integration embedded, which greatly simplifies the setup.
It also removes the legacy dependence upon both Apache Tomcat for running the old Nutch Web
Application and upon Apache Lucene for indexing. Just download a 1.3 binary release from [[http://www.apache.org/dyn/closer.cgi/nutch/|here]].
  
- {{{runtime/local/bin/nutch}}}
+ == Table of Contents ==
+ <<TableOfContents(3)>>
+  
+ == Steps ==
  
- This will display the documentation for the Nutch command script.
+ == 1. Setup Nutch from binary distribution ==
  
- Good! You are almost ready to crawl. You need to give your crawler a name. This is required.
+  * Unzip your binary Nutch package to $HOME/nutch-1.3
+  * cd $HOME/nutch-1.3/runtime/local 
  
-  1. Edit $NUTCH_HOME/runtime/local/conf/nutch-site.xml and add
+ From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory.
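+ 
+ For reference, a minimal sketch of this setup step as shell commands (the archive file name
and the extracted directory name below are assumptions and may differ for your download):
+ {{{
+ # example only: adjust the archive/directory names to match your download
+ tar xzf $HOME/apache-nutch-1.3-bin.tar.gz -C $HOME
+ mv $HOME/apache-nutch-1.3 $HOME/nutch-1.3
+ cd $HOME/nutch-1.3/runtime/local
+ export NUTCH_RUNTIME_HOME=`pwd`
+ }}}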
  
+ == 2. Verify your Nutch installation ==
+  
+  * run "bin/nutch" - You can confirm a correct installation if you seeing the following:
+ {{{
+ Usage: nutch [-core] COMMAND
+ }}}
+ 
+ Some troubleshooting tips:
+  * Run the following command if you are seeing "Permission denied":
+ {{{
+ chmod +x bin/nutch
+ }}}
+  * Set JAVA_HOME if you are seeing "JAVA_HOME not set". On a Mac, you can run the following
command or add it to ~/.bashrc:
+ {{{
+ export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
+ }}}
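+ On Linux the JAVA_HOME path depends on your distribution and JDK; the path below is only an
illustrative assumption and will likely differ on your system:
+ {{{
+ # example only: point JAVA_HOME at your JDK installation directory
+ export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
+ }}}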
+ 
+ == 3. Crawl your first website ==
+ 
+  *  Add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml,
for example:
  {{{
  <property>
-   <name>http.agent.name</name>
+  <name>http.agent.name</name>
-   <value>YOUR_CRAWLER_NAME_HERE</value>
+  <value>My Nutch Spider</value>
  </property>
  }}}
+  * mkdir -p urls
+  * create a text file named nutch under the urls/ directory with the following content (one
URL per line for each site you want Nutch to crawl):
+ {{{
+ http://nutch.apache.org/
+ }}}
- 
-  1. Replace YOUR_CRAWLER_NAME_HERE with the name you want to give to your crawler
-  1. Optionally you may also set the {{{http.agent.url}}} and {{{http.agent.email}}} properties
so that webmasters can identify who is crawling their site and contact you if necessary.
- 
- '''''Note''''' : It is advised to specify your parameters in the file nutch-site.xml and
leave nutch-default.xml as it is. The latter should be used as a reference only for checking
the list of available parameters and their descriptions.
- 
- Now we're ready to crawl. There are two approaches to crawling:
- 
-  1. Using the '''crawl''' command to perform all the crawl steps with a single command.
 This is sometimes referred to as '''Intranet Crawling'''.  Although a simple way to get started,
it has limitations.
-  1. Using the lower level inject, generate, fetch and updatedb commands.  Sometimes referred
to as '''Whole-Web Crawling''' this allows you more control of each step of the process and
is required to be able to update existing data.
- 
- == The Crawl Command ==
- The crawl command is more appropriate when you intend to crawl up to around one million
pages on a handful of web servers.
- 
- === Crawl Command: Configuration ===
- To configure things for the crawl command you must:
- 
-  * Create a directory with a flat file of root urls. For example, to crawl the nutch site
you might start with a file named urls/nutch containing the url of just the Nutch home page.
All other Nutch pages should be reachable from this page. The urls/nutch file would thus contain:
-  {{{ http://lucene.apache.org/nutch/ }}}
- 
-  * Edit the file conf/regex-urlfilter.txt and replace 
+  * Edit the file conf/regex-urlfilter.txt and replace 
- 
  {{{
  # accept anything else
  +.  
  }}}
  
- with a regular expression matching the domain you wish to crawl. For example, if you wished
to limit the crawl to the apache.org domain, the line should read:
+ with a regular expression matching the domain you wish to crawl. For example, if you wished
to limit the crawl to the nutch.apache.org domain, the line should read:
  
  {{{
-  +^http://([a-z0-9]*\.)*apache.org/ 
+  +^http://([a-z0-9]*\.)*nutch.apache.org/ 
  }}} 
  
- This will include any url in the domain apache.org.
+ This will include any url in the domain nutch.apache.org.
  
- === Crawl Command: Running the Crawl ===
- Once things are configured, running the crawl is easy. Just use the crawl command. Its options
include:
+ === 3.1 Using the Crawl Command ===
+ 
+ Now we are ready to initiate a crawl. Use the following parameters:
  
   * '''-dir''' ''dir'' names the directory to put the crawl in.
   * '''-threads''' ''threads'' determines the number of threads that will fetch in parallel.
   * '''-depth''' ''depth'' indicates the link depth from the root page that should be crawled.
   * '''-topN''' ''N'' determines the maximum number of pages that will be retrieved at each
level up to the depth.
+  * Run the following command:
+ {{{
- 
- For example, a typical call might be:
- 
-  . {{{ bin/nutch crawl urls -dir crawl -depth 3 -topN 50 }}}
+ bin/nutch crawl urls -dir crawl -depth 3 -topN 5
+ }}}
+  * Now you should be able to see the following directories created:
+ {{{
+ crawl/crawldb 
+ crawl/linkdb
+ crawl/segments
+ }}}
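+ 
+ If you also want to control how many fetcher threads run in parallel, the same command accepts
the -threads option described above, for example (10 threads is just an illustrative value):
+ {{{
+ bin/nutch crawl urls -dir crawl -threads 10 -depth 3 -topN 5
+ }}}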
+ 
+ '''NOTE''': If you have a Solr core already set up and wish to index to it, you are required
to add the -solr <solrUrl> parameter to your crawl command e.g.
+ {{{
+ bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5
+ }}}
+ If not, please skip to [[#4. Setup Solr for search|here]] for instructions on setting up your Solr
instance and indexing your crawl data.
  
  Typically one starts testing one's configuration by crawling at shallow depths, sharply
limiting the number of pages fetched at each level (-topN), and watching the output to check
that desired pages are fetched and undesirable pages are not. Once one is confident of the
configuration, then an appropriate depth for a full crawl is around 10. The number of pages
per level (-topN) for a full crawl can be from tens of thousands to millions, depending on
your resources.
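+ 
+ As a sketch, such a shallow test run might look like the following (the directory name crawl_test
and the parameter values are just illustrative choices):
+ {{{
+ bin/nutch crawl urls -dir crawl_test -depth 2 -topN 10
+ }}}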
  
- Once crawling has completed, one can skip to the [[#Command_Line_Searching|Searching section]]
below.
+ === 3.2 Using Individual Commands for Whole-web Crawling ===
  
- == Step-by-Step or Whole-web Crawling ==
  Whole-web crawling is designed to handle very large crawls which may take weeks to complete,
running on multiple machines.  This also permits more control over the crawl process, and
incremental crawling.  It is important to note that whole web crawling does not necessarily
mean crawling the entire world wide web.  We can limit a whole web crawl to just a list of
the URLs we want to crawl.  This is done by using a filter just like the one we used when
we ran the crawl command (above).
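+ 
+ As a sketch, a conf/regex-urlfilter.txt limited to a couple of sites could contain patterns like
the following (the second domain is only a placeholder; URLs matching none of the + patterns
are discarded):
+ {{{
+ # accept urls within these domains only
+  +^http://([a-z0-9]*\.)*nutch.apache.org/
+  +^http://([a-z0-9]*\.)*example.org/
+ }}}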
  
- === Step-by-Step: Concepts ===
+ ==== Step-by-Step: Concepts ====
  Nutch data is composed of:
  
   1. The crawl database, or crawldb. This contains information about every url known to Nutch,
including whether it was fetched, and, if so, when.
-  1. The link database, or linkdb. This contains the list of known links to each url, including
both the source url and anchor text of the link.
+  2. The link database, or linkdb. This contains the list of known links to each url, including
both the source url and anchor text of the link.
-  1. A set of segments. Each segment is a set of urls that are fetched as a unit. Segments
are directories with the following subdirectories:
+  3. A set of segments. Each segment is a set of urls that are fetched as a unit. Segments
are directories with the following subdirectories:
    * a ''crawl_generate'' names a set of urls to be fetched
    * a ''crawl_fetch'' contains the status of fetching each url
    * a ''content'' contains the raw content retrieved from each url
    * a ''parse_text'' contains the parsed text of each url
    * a ''parse_data'' contains outlinks and metadata parsed from each url
    * a ''crawl_parse'' contains the outlink urls, used to update the crawldb
-  1. The indexes are Lucene-format indexes.
  
- === Step-by-Step: Seeding the Crawl DB with a list of URLS ===
+ ==== Step-by-Step: Seeding the Crawl DB with a list of URLS ====
- ==== Option 1:  Bootstrapping from the DMOZ database. ====
+ ===== Option 1:  Bootstrapping from the DMOZ database. =====
  The injector adds urls to the crawldb. Let's inject URLs from the DMOZ Open Directory. First
we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+Mb
file, so this will take a few minutes.)
  
  {{{
@@ -111, +128 @@

  }}}
  The parser also takes a few minutes, as it must parse the full file. Finally, we initialize
the crawl db with the selected urls.
  
+ {{{ 
- {{{ bin/nutch inject crawl/crawldb dmoz }}}
+ bin/nutch inject crawldb dmoz 
+ }}}
  
  Now we have a web database with around 1000 as-yet unfetched URLs in it.
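+ 
+ To check what was injected (now or at any later point), the crawldb can be inspected with the
readdb command, e.g.:
+ {{{
+ bin/nutch readdb crawldb -stats
+ }}}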
  
- ==== Option 2.  Bootstrapping from an initial seed list. ====
+ ===== Option 2.  Bootstrapping from an initial seed list. =====
- Instead of bootstrapping from DMOZ, we can create a text file called {{{urls}}}, this file
should have one url per line.  We can initialize the crawl db with the selected urls.
+ This option mirrors the creation of the seed list covered [[#3. Crawl your first website|here]].
  
+ {{{ 
- {{{ bin/nutch inject crawl/crawldb urls }}}
+ bin/nutch inject crawldb urls 
+ }}}
  
- ''NOTE: version 0.8 and higher requires that we put this file into a subdirectory, e.g.
{{{seed/urls}}} - in this case the command looks like this:''
- 
- {{{ bin/nutch inject crawl/crawldb seed }}}
- 
- === Step-by-Step: Fetching ===
+ ==== Step-by-Step: Fetching ====
- To fetch, we first generate a fetchlist from the database:
+ To fetch, we first generate a fetch list from the database:
  
+ {{{ 
- {{{ bin/nutch generate crawl/crawldb crawl/segments }}}
+ bin/nutch generate crawldb segments 
+ }}}
  
- This generates a fetchlist for all of the pages due to be fetched. The fetchlist is placed
in a newly created segment directory. The segment directory is named by the time it's created.
We save the name of this segment in the shell variable {{{s1}}}:
+ This generates a fetch list for all of the pages due to be fetched. The fetch list is placed
in a newly created segment directory. The segment directory is named by the time it's created.
We save the name of this segment in the shell variable {{{s1}}}:
  
  {{{
- s1=`ls -d crawl/segments/2* | tail -1`
+ s1=`ls -d segments/2* | tail -1`
@@ -137, +156 @@

  }}}
  Now we run the fetcher on this segment with:
  
+ {{{ 
- {{{ bin/nutch fetch $s1 }}}
+ bin/nutch fetch $s1 
+ }}}
  
  When this is complete, we update the database with the results of the fetch:
  
+ {{{ 
- {{{ bin/nutch updatedb crawl/crawldb $s1 }}}
+ bin/nutch updatedb crawldb $s1 
+ }}}
  
  Now the database contains both updated entries for all initial pages as well as new entries
that correspond to newly discovered pages linked from the initial set.
  
  Then we parse the entries:
  
+ {{{ 
- {{{ bin/nutch parse $1 }}}
+ bin/nutch parse $s1 
+ }}}
  
  Now we generate and fetch a new segment containing the top-scoring 1000 pages:
  
  {{{
- bin/nutch generate crawl/crawldb crawl/segments -topN 1000
+ bin/nutch generate crawldb segments -topN 1000
- s2=`ls -d crawl/segments/2* | tail -1`
+ s2=`ls -d segments/2* | tail -1`
  echo $s2
  
  bin/nutch fetch $s2
- bin/nutch updatedb crawl/crawldb $s2
+ bin/nutch updatedb crawldb $s2
  bin/nutch parse $s2
  }}}
  Let's fetch one more round:
  
  {{{
- bin/nutch generate crawl/crawldb crawl/segments -topN 1000
+ bin/nutch generate crawldb segments -topN 1000
- s3=`ls -d crawl/segments/2* | tail -1`
+ s3=`ls -d segments/2* | tail -1`
  echo $s3
  
  bin/nutch fetch $s3
- bin/nutch updatedb crawl/crawldb $s3
+ bin/nutch updatedb crawldb $s3
  bin/nutch parse $s3
  }}}
  By this point we've fetched a few thousand pages. Let's index them!
  
- === Step-by-Step: Indexing ===
+ ==== Step-by-Step: Invertlinks ====
  Before indexing we first invert all of the links, so that we may index incoming anchor text
with the pages.
  
+ {{{ 
- {{{ bin/nutch invertlinks crawl/linkdb -dir crawl/segments }}}
+ bin/nutch invertlinks linkdb -dir segments 
+ }}}
  
- NOTE: the invertlinks command only applies to Nutch 0.8 and higher.
+ We are now ready to search with Apache Solr. 
  
- To index the segments we use the index command, as follows:
+ == 4. Setup Solr for search ==
  
+  * download the binary release from [[http://www.apache.org/dyn/closer.cgi/lucene/solr/|here]]
+  * unzip to $HOME/apache-solr-3.X; we will now refer to this as ${APACHE_SOLR_HOME}
+  * cd ${APACHE_SOLR_HOME}/example
+  * java -jar start.jar
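+ 
+ As shell commands this could look like the following sketch (the 3.4.0 archive and directory
names are just examples; use whichever 3.x release you downloaded):
+ {{{
+ # example only: adjust the version to the 3.x release you downloaded
+ tar xzf apache-solr-3.4.0.tgz -C $HOME
+ export APACHE_SOLR_HOME=$HOME/apache-solr-3.4.0
+ cd ${APACHE_SOLR_HOME}/example
+ java -jar start.jar
+ }}}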
+ 
+ == 5. Verify Solr installation ==
+ 
+ After you have started Solr, you should be able to access the following links:
+ {{{
+ http://localhost:8983/solr/admin/
+ http://localhost:8983/solr/admin/stats.jsp
+ }}}
+ 
+ == 6. Integrate Solr with Nutch ==
+ 
+ We now have both Nutch and Solr installed and set up correctly, and Nutch has already created
crawl data from the seed URL(s). The steps below delegate searching to Solr so that the crawled
links become searchable:
+  * cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/conf/ 
+  * restart Solr with the command "java -jar start.jar" under ${APACHE_SOLR_HOME}/example

+  * run the Solr Index command:
+ {{{
- {{{ bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb crawl/linkdb crawl/segments/*
}}}
+ bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb crawl/linkdb crawl/segments/*
+ }}}
+ This will send all crawl data to Solr for indexing. For more information please see bin/nutch
solrindex.
+  
+ If all has gone to plan, we are now ready to search with http://localhost:8983/solr/admin/.
 If you want to see the raw HTML indexed by Solr, change the content field definition in schema.xml
to:
+ {{{
+ <field name="content" type="text" stored="true" indexed="true"/>
+ }}}
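+ 
+ Once indexing has completed you can also query Solr directly over HTTP, for example (assuming
the default example Solr setup and the url/title fields from the Nutch schema.xml):
+ {{{
+ curl "http://localhost:8983/solr/select?q=nutch&fl=url,title"
+ }}}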
  
- Now we're ready to search!
- 
- == Command Line Searching  ==
- '''''This section needs to be updated for Nutch 1.3. [[NutchTutorialPre1.3|Pre 1.3 tutorial
can be found here.]]'''''
- 
- == Installing in Tomcat ==
- '''Deprecated after Nutch 1.2, Nutch 1.3 uses Solr'''
- 
