knox-commits mailing list archives

From kmin...@apache.org
Subject svn commit: r1529302 - in /incubator/knox: site/ site/books/knox-incubating-0-3-0/ trunk/books/0.3.0/ trunk/books/common/
Date Fri, 04 Oct 2013 21:04:08 GMT
Author: kminder
Date: Fri Oct  4 21:04:07 2013
New Revision: 1529302

URL: http://svn.apache.org/r1529302
Log:
Finish WebHDFS section.

Modified:
    incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html
    incubator/knox/site/index.html
    incubator/knox/site/issue-tracking.html
    incubator/knox/site/license.html
    incubator/knox/site/mail-lists.html
    incubator/knox/site/project-info.html
    incubator/knox/site/team-list.html
    incubator/knox/trunk/books/0.3.0/book.md
    incubator/knox/trunk/books/0.3.0/book_getting-started.md
    incubator/knox/trunk/books/0.3.0/book_service-details.md
    incubator/knox/trunk/books/0.3.0/service_hbase.md
    incubator/knox/trunk/books/0.3.0/service_hive.md
    incubator/knox/trunk/books/0.3.0/service_oozie.md
    incubator/knox/trunk/books/0.3.0/service_webhcat.md
    incubator/knox/trunk/books/0.3.0/service_webhdfs.md
    incubator/knox/trunk/books/common/header.md

Modified: incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html (original)
+++ incubator/knox/site/books/knox-incubating-0-3-0/knox-incubating-0-3-0.html Fri Oct  4 21:04:07 2013
@@ -26,8 +26,8 @@
     <li><a href="#Verify">Verify</a></li>
     <li><a href="#Install">Install</a></li>
     <li><a href="#Supported+Services">Supported Services</a></li>
-    <li><a href="#Basic+Usage">Basic Usage</a></li>
     <li><a href="#Sandbox+Configuration">Sandbox Configuration</a></li>
+    <li><a href="#Basic+Usage">Basic Usage</a></li>
   </ul></li>
   <li><a href="#Gateway+Details">Gateway Details</a>
   <ul>
@@ -228,7 +228,7 @@
       <td><img src="question.png"  alt="?"/> </td>
     </tr>
   </tbody>
-</table><h3><a id="Basic+Usage"></a>Basic Usage</h3><p>The steps described below are intended to get the Knox Gateway server up and running in its default configuration. Once that is accomplished a very simple example of using the gateway to interact with a Hadoop cluster is provided. More detailed configuration information is provided in the <a href="#Gateway+Details">Gateway Details</a> section. More detailed examples for using each Hadoop service can be found in the <a href="#Service+Details">Service Details</a> section.</p><p>Note that *nix conventions are used throughout this section but in general the Windows alternative should be obvious. In situations where this is not the case a Windows alternative will be provided.</p><h4><a id="Starting+Servers"></a>Starting Servers</h4><h5><a id="1.+Enter+the+`{GATEWAY_HOME}`+directory"></a>1. Enter the <code>{GATEWAY_HOME}</code> directory</h5>
+</table><h3><a id="Sandbox+Configuration"></a>Sandbox Configuration</h3><p>TODO</p><h3><a id="Basic+Usage"></a>Basic Usage</h3><p>The steps described below are intended to get the Knox Gateway server up and running in its default configuration. Once that is accomplished a very simple example of using the gateway to interact with a Hadoop cluster is provided. More detailed configuration information is provided in the <a href="#Gateway+Details">Gateway Details</a> section. More detailed examples for using each Hadoop service can be found in the <a href="#Service+Details">Service Details</a> section.</p><p>Note that *nix conventions are used throughout this section but in general the Windows alternative should be obvious. In situations where this is not the case a Windows alternative will be provided.</p><h4><a id="Starting+Servers"></a>Starting Servers</h4><h5><a id="1.+Enter+the+`{GATEWAY_HOME}`+directory"></a>1. Enter the <code>{GATEWAY_HOME}</code> directory</h5>
 <pre><code>cd knox-incubation-{VERSION}
 </code></pre><p>The fully qualified name of this directory will be referenced as <code>{GATEWAY_HOME}</code> throughout this document.</p><h5><a id="2.+Start+the+demo+LDAP+server+(ApacheDS)"></a>2. Start the demo LDAP server (ApacheDS)</h5><p>First, understand that the LDAP server provided here is for demonstration purposes. You may configure the gateway to utilize other LDAP systems via the topology descriptor. This is described in step 5 below. The assumption is that most users will leverage the demo LDAP server while evaluating this release and should therefore continue with the instructions here in step 3.</p><p>Edit <code>{GATEWAY_HOME}/conf/users.ldif</code> if required and add any desired users and groups to the file. A sample end user &ldquo;guest&rdquo; has been already included. Note that the passwords in this file are &ldquo;fictitious&rdquo; and have nothing to do with the actual accounts on the Hadoop cluster you are using. There is also a copy of this file in the templates directory that you can use to start over if necessary. This file is only used by the demo LDAP server.</p><p>Start the LDAP server - pointing it to the config dir where it will find the <code>users.ldif</code> file in the conf directory.</p>
 <pre><code>java -jar bin/ldap.jar conf &amp;
@@ -1251,71 +1251,195 @@ dep/commons-codec-1.7.jar
     <li>JSON Path <a href="https://code.google.com/p/json-path/">API</a></li>
     <li>GPath <a href="http://groovy.codehaus.org/GPath">Overview</a></li>
   </ul></li>
-</ul><h2><a id="Service+Details"></a>Service Details</h2><p>TODO - Service details overview</p><h3><a id="WebHDFS"></a>WebHDFS</h3><p>TODO</p><h4><a id="WebHDFS+URL+Mapping"></a>WebHDFS URL Mapping</h4><p>TODO</p><h4><a id="WebHDFS+Examples"></a>WebHDFS Examples</h4><p>TODO</p><h4><a id="Assumptions"></a>Assumptions</h4><p>This document assumes a few things about your environment in order to simplify the examples.</p>
+</ul><h2><a id="Service+Details"></a>Service Details</h2><p>In the sections that follow the integrations currently available out of the box with the gateway will be described. In general these sections will include examples that demonstrate how to access each of these services via the gateway. In many cases this will include both the use of <a href="http://curl.haxx.se/">cURL</a> as a REST API client as well as the use of the Knox Client DSL. You may notice that there are some minor differences between using the REST API of a given service via the gateway. In general this is necessary in order to achieve the goal of leaking internal Hadoop cluster details to the client.</p><p>Keep in mind that the gateway uses a plugin model for supporting Hadoop services. Check back with a the <a href="http://knox.incubator.apache.org">Apache Knox</a> site for the latest news on plugin availability. You can also create your own custom plugin to extend the capabilities of the gateway.</p><h3><a id="
 Assumptions"></a>Assumptions</h3><p>This document assumes a few things about your environment in order to simplify the examples.</p>
 <ul>
   <li>The JVM is executable as simply java.</li>
   <li>The Apache Knox Gateway is installed and functional.</li>
   <li>The example commands are executed within the context of the GATEWAY_HOME current directory. The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.</li>
-  <li>A few examples optionally require the use of commands from a standard Groovy installation. These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].</li>
-</ul><p>h2. Customization</p><p>These examples may need to be tailored to the execution environment. In particular hostnames and ports may need to be changes to match your environment. In particular there are two example files in the distribution that may need to be customized. Take a moment to review these files. All of the values that may need to be customized can be found together at the top of each file.</p>
-<ul>
-  <li>samples/ExampleWebHDFS.groovy</li>
-</ul><h4><a id="WebHDFS+via+KnoxShell+DSL"></a>WebHDFS via KnoxShell DSL</h4><p>You can use the Groovy interpreter provided with the distribution.</p>
-<pre><code>java -jar bin/shell.jar samples/ExampleWebHDFS.groovy
-</code></pre><p>You can manually type in the KnoxShell DSL script into the interactive Groovy interpreter provided with the distribution.</p>
+  <li>The <a href="http://curl.haxx.se/">cURL</a> command line HTTP client utility is installed and functional.</li>
+  <li>A few examples optionally require the use of commands from a standard Groovy installation. These examples are optional but to try them you will need Groovy <a href="http://groovy.codehaus.org/Installing+Groovy">installed</a>.</li>
+  <li>The default configuration for all of the samples is set up for use with Hortonworks&rsquo; <a href="http://hortonworks.com/products/hortonworks-sandbox">Sandbox</a> version 2.</li>
+</ul><h3><a id="Customization"></a>Customization</h3><p>Using these samples with other Hadoop installations will require changes to the steps describe here as well as changes to referenced sample scripts. This will also likely require changes to the gateway&rsquo;s default configuration. In particular host names, ports user names and password may need to be changes to match your environment. These changes may need to be made to gateway configuration and also the Groovy sample script files in the distribution. All of the values that may need to be customized in the sample scripts can be found together at the top of each of these files.</p><h3><a id="cURL"></a>cURL</h3><p>The cURL HTTP client command line utility is used extensively in the examples for each service. In particular this form of the cURL command line is used repeatedly.</p>
+<pre><code>curl -i -k -u guest:guest-password ...
+</code></pre><p>The option -i (aka &ndash;include) is used to output HTTP response header information. This will be important when the content of the HTTP Location header is required for subsequent requests.</p><p>The option -k (aka &ndash;insecure) is used to avoid any issues resulting from the use of demonstration SSL certificates.</p><p>The option -u (aka &ndash;user) is used to provide the credentials to be used when the client is challenged by the gateway.</p><p>Keep in mind that the samples do not use the cookie features of cURL for the sake of simplicity. Therefore each request via cURL will result in a separate authentication.</p><h3><a id="WebHDFS"></a>WebHDFS</h3><p>REST API access to HDFS in a Hadoop cluster is provided by WebHDFS. The <a href="http://hadoop.apache.org/docs/stable/webhdfs.html">WebHDFS REST API</a> documentation is available online. WebHDFS must be enabled in the hdfs-site.xml configuration file. In the Sandbox this configuration file is located at /etc/hadoop/conf/hdfs-site.xml. Note the properties shown below as they are related to configuration required by the gateway. Some of these represent the default values and may not actually be present in hdfs-site.xml.</p>
+<pre><code>&lt;property&gt;
+    &lt;name&gt;dfs.webhdfs.enabled&lt;/name&gt;
+    &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+    &lt;name&gt;dfs.namenode.rpc-address&lt;/name&gt;
+    &lt;value&gt;sandbox.hortonworks.com:8020&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+    &lt;name&gt;dfs.namenode.http-address&lt;/name&gt;
+    &lt;value&gt;sandbox.hortonworks.com:50070&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+    &lt;name&gt;dfs.https.namenode.https-address&lt;/name&gt;
+    &lt;value&gt;sandbox.hortonworks.com:50470&lt;/value&gt;
+&lt;/property&gt;
+</code></pre><p>The values above need to be reflected in each topology descriptor file deployed to the gateway. The gateway by default includes a sample topology descriptor file <code>{GATEWAY_HOME}/deployments/sandbox.xml</code>. The values in this sample are configured to work with an installed Sandbox VM.</p>
+<pre><code>&lt;service&gt;
+    &lt;role&gt;NAMENODE&lt;/role&gt;
+    &lt;url&gt;hdfs://localhost:8020&lt;/url&gt;
+&lt;/service&gt;
+&lt;service&gt;
+    &lt;role&gt;WEBHDFS&lt;/role&gt;
+    &lt;url&gt;http://localhost:50070/webhdfs&lt;/url&gt;
+&lt;/service&gt;
+</code></pre><p>The URL provided for the role NAMENODE does not result in an endpoint being exposed by the gateway. This information is only required so that other URLs can be rewritten that reference the Name Node&rsquo;s RPC address. This prevents clients from needing to be aware of the internal cluster details.</p><p>By default the gateway is configured to use the HTTP endpoint for WebHDFS in the Sandbox. This could alternatively be configured to use the HTTPS endpoint by providing the correct address.</p><h4><a id="WebHDFS+URL+Mapping"></a>WebHDFS URL Mapping</h4><p>For Name Node URLs, the mapping of Knox Gateway accessible WebHDFS URLs to direct WebHDFS URLs is simple.</p>
+<table>
+  <tbody>
+    <tr>
+      <td>Gateway </td>
+      <td><code>https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs</code> </td>
+    </tr>
+    <tr>
+      <td>Cluster </td>
+      <td><code>http://{webhdfs-host}:50070/webhdfs</code> </td>
+    </tr>
+  </tbody>
+</table><p>However, there is a subtle difference in the URLs that are returned by WebHDFS in the Location header of many requests. Direct WebHDFS requests may return Location headers that contain the address of a particular Data Node. The gateway will rewrite these URLs to ensure subsequent requests come back through the gateway and internal cluster details are protected.</p><p>A WebHDFS request to the Name Node to retrieve a file will return a URL of the form below in the Location header.</p>
+<pre><code>http://{datanode-host}:{data-node-port}/webhdfs/v1/{path}?...
+</code></pre><p>Note that this URL contains the network location of a Data Node. The gateway will rewrite this URL to look like the URL below.</p>
+<pre><code>https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs/data/v1/{path}?_={encrypted-query-parameters}
+</code></pre><p>The <code>{encrypted-query-parameters}</code> will contain the <code>{datanode-host}</code> and <code>{datanode-port}</code> information. This information, along with the original query parameters, is encrypted so that the internal Hadoop details are protected.</p><h4><a id="WebHDFS+Examples"></a>WebHDFS Examples</h4><p>The examples below upload a file, download the file and list the contents of the directory.</p><h5><a id="WebHDFS+via+client+DSL"></a>WebHDFS via client DSL</h5><p>You can use the Groovy example scripts and interpreter provided with the distribution.</p>
+<pre><code>java -jar bin/shell.jar samples/ExampleWebHfsPutGet.groovy
+java -jar bin/shell.jar samples/ExampleWebHfsLs.groovy
+</code></pre><p>You can manually type the client DSL script into the KnoxShell interactive Groovy interpreter provided with the distribution. The command below starts the KnoxShell in interactive mode.</p>
 <pre><code>java -jar bin/shell.jar
-</code></pre><p>Each line from the file below will need to be typed or copied into the interactive shell.</p><h5><a id="samples/ExampleHdfs.groovy"></a>samples/ExampleHdfs.groovy</h5>
-<pre><code>import groovy.json.JsonSlurper
+</code></pre><p>Each line below could be typed or copied into the interactive shell and executed. This is provided as an example to illustrate the use of the client DSL.</p>
+<pre><code>// Import the client DSL and a useful utility for working with JSON.
 import org.apache.hadoop.gateway.shell.Hadoop
 import org.apache.hadoop.gateway.shell.hdfs.Hdfs
+import groovy.json.JsonSlurper
 
+// Set up some basic config.
 gateway = &quot;https://localhost:8443/gateway/sandbox&quot;
-username = &quot;bob&quot;
-password = &quot;bob-password&quot;
-dataFile = &quot;README&quot;
+username = &quot;guest&quot;
+password = &quot;guest-password&quot;
 
+// Start the session.
 session = Hadoop.login( gateway, username, password )
-Hdfs.rm( session ).file( &quot;/tmp/example&quot; ).recursive().now()
-Hdfs.put( session ).file( dataFile ).to( &quot;/tmp/example/README&quot; ).now()
-text = Hdfs.ls( session ).dir( &quot;/tmp/example&quot; ).now().string
+
+// Clean up anything left over from a previous run.
+Hdfs.rm( session ).file( &quot;/user/guest/example&quot; ).recursive().now()
+
+// Upload the README to HDFS.
+Hdfs.put( session ).file( &quot;README&quot; ).to( &quot;/user/guest/example/README&quot; ).now()
+
+// Download the README from HDFS.
+text = Hdfs.get( session ).from( &quot;/user/guest/example/README&quot; ).now().string
+println text
+
+// List the contents of the directory.
+text = Hdfs.ls( session ).dir( &quot;/user/guest/example&quot; ).now().string
 json = (new JsonSlurper()).parseText( text )
 println json.FileStatuses.FileStatus.pathSuffix
-text = Hdfs.get( session ).from( &quot;/tmp/example/README&quot; ).now().string
-println text
-Hdfs.rm( session ).file( &quot;/tmp/example&quot; ).recursive().now()
-session.shutdown()
-</code></pre><h4><a id="WebHDFS+via+cURL"></a>WebHDFS via cURL</h4>
-<pre><code># 1. Optionally cleanup the sample directory in case a previous example was run without cleaning up.
-curl -i -k -u bob:bob-password -X DELETE \
-    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&amp;recursive=true&#39;
 
-# 2. Create the inode for a sample input file readme.txt in /tmp/test/input.
-curl -i -k -u bob:bob-password -X PUT \
-    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input/README?op=CREATE&#39;
+// Clean up the directory.
+Hdfs.rm( session ).file( &quot;/user/guest/example&quot; ).recursive().now()
 
-# 3. Upload readme.txt to /tmp/test/input.  Use the readme.txt in {GATEWAY_HOME}.
-# The sample below uses this README file found in {GATEWAY_HOME}.
-curl -i -k -u bob:bob-password -T README -X PUT \
+// Clean up the session.
+session.shutdown()
+</code></pre><h5><a id="WebHDFS+via+cURL"></a>WebHDFS via cURL</h5><p>Use can use cURL to directly invoke the REST APIs via the gateway.</p><h6><a id="Optionally+cleanup+the+sample+directory+in+case+a+previous+example+was+run+without+cleaning+up."></a>Optionally cleanup the sample directory in case a previous example was run without cleaning up.</h6>
+<pre><code>curl -i -k -u guest:guest-password -X DELETE \
+    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example?op=DELETE&amp;recursive=true&#39;
+</code></pre><h6><a id="Register+the+name+for+a+sample+file+README+in+/user/guest/example."></a>Register the name for a sample file README in /user/guest/example.</h6>
+<pre><code>curl -i -k -u guest:guest-password -X PUT \
+    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example/README?op=CREATE&#39;
+</code></pre><h6><a id="Upload+README+to+/user/guest/example.++Use+the+README+in+{GATEWAY_HOME}."></a>Upload README to /user/guest/example. Use the README in {GATEWAY_HOME}.</h6>
+<pre><code>curl -i -k -u guest:guest-password -T README -X PUT \
     &#39;{Value of Location header from command above}&#39;
-
-# 4. List the contents of the output directory /tmp/test/output
-curl -i -k -u bob:bob-password -X GET \
-    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input?op=LISTSTATUS&#39;
-
-# 5. Optionally cleanup the test directory
-curl -i -k -u bob:bob-password -X DELETE \
-    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&amp;recursive=true&#39;
-</code></pre><h3><a id="WebHCat"></a>WebHCat</h3><p>TODO</p><h4><a id="WebHCat+URL+Mapping"></a>WebHCat URL Mapping</h4><p>TODO</p><h4><a id="WebHCat+Examples"></a>WebHCat Examples</h4><p>TODO</p><h4><a id="Assumptions"></a>Assumptions</h4><p>This document assumes a few things about your environment in order to simplify the examples.</p>
+</code></pre><h6><a id="List+the+contents+of+the+directory+/user/guest/example."></a>List the contents of the directory /user/guest/example.</h6>
+<pre><code>curl -i -k -u guest:guest-password -X GET \
+    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example?op=LISTSTATUS&#39;
+</code></pre><h6><a id="Request+the+content+of+the+README+file+in+/user/guest/example."></a>Request the content of the README file in /user/guest/example.</h6>
+<pre><code>curl -i -k -u guest:guest-password -X GET \
+    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example/README?op=OPEN&#39;
+</code></pre><h6><a id="Read+the+content+of+the+file."></a>Read the content of the file.</h6>
+<pre><code>curl -i -k -u guest:guest-password -X GET \
+    &#39;{Value of Location header from command above}&#39;
+</code></pre><h6><a id="Optionally+cleanup+the+example+directory."></a>Optionally cleanup the example directory.</h6>
+<pre><code>curl -i -k -u guest:guest-password -X DELETE \
+    &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example?op=DELETE&amp;recursive=true&#39;
+</code></pre><h5><a id="WebHDFS+client+DSL"></a>WebHDFS client DSL</h5><h6><a id="get+-+Get+a+file+from+HDFS+(OPEN)."></a>get - Get a file from HDFS (OPEN).</h6>
 <ul>
-  <li>The JVM is executable as simply java.</li>
-  <li>The Apache Knox Gateway is installed and functional.</li>
-  <li>The example commands are executed within the context of the GATEWAY_HOME current directory. The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.</li>
-  <li>A few examples optionally require the use of commands from a standard Groovy installation. These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].</li>
-</ul><h4><a id="Customization"></a>Customization</h4><p>These examples may need to be tailored to the execution environment. In particular hostnames and ports may need to be changes to match your environment. In particular there are two example files in the distribution that may need to be customized. Take a moment to review these files. All of the values that may need to be customized can be found together at the top of each file.</p>
+  <li>Request
+  <ul>
+    <li>from( String name ) - The full name of the file in HDFS.</li>
+    <li>file( String name ) - The name of a local file to create with the content.</li>
+  </ul></li>
+  <li>Response
+  <ul>
+    <li>BasicResponse</li>
+    <li>If the file parameter is specified the content will be streamed to that file.</li>
+  </ul></li>
+  <li>Example
+  <ul>
+    <li><code>Hdfs.get( session ).from( &quot;/user/guest/example/README&quot; ).now().string</code></li>
+  </ul></li>
+</ul><h6><a id="ls+-+Query+the+contents+of+a+directory+(LISTSTATUS)"></a>ls - Query the contents of a directory (LISTSTATUS)</h6>
+<ul>
+  <li>Request
+  <ul>
+    <li>dir( String name ) - The full name of the directory in HDFS.</li>
+  </ul></li>
+  <li>Response
+  <ul>
+    <li>BasicResponse</li>
+  </ul></li>
+  <li>Example
+  <ul>
+    <li><code>Hdfs.ls( session ).dir( &quot;/user/guest/example&quot; ).now().string</code></li>
+  </ul></li>
+</ul><h6><a id="mkdir+-+Create+a+directory+in+HDFS+(MKDIRS)"></a>mkdir - Create a directory in HDFS (MKDIRS)</h6>
+<ul>
+  <li>Request
+  <ul>
+    <li>dir( String name ) - The full name of the directory to create in HDFS.</li>
+    <li>perm( String perm ) - The permissions for the directory (e.g. 644).</li>
+  </ul></li>
+  <li>Response
+  <ul>
+    <li>BasicResponse</li>
+  </ul></li>
+  <li>Example
+  <ul>
+    <li><code>Hdfs.mkdir( session ).dir( &quot;/user/guest/example&quot; ).now()</code></li>
+  </ul></li>
+</ul><h6><a id="put+-+Write+a+file+into+HDFS+(CREATE)"></a>put - Write a file into HDFS (CREATE)</h6>
+<ul>
+  <li>Request
+  <ul>
+    <li>text( String text ) - Text to upload to HDFS. Takes precedence over file if both are present.</li>
+    <li>file( String name ) - The name of a local file to upload to HDFS.</li>
+    <li>to( String name ) - The fully qualified name to create in HDFS.</li>
+  </ul></li>
+  <li>Response
+  <ul>
+    <li>BasicResponse</li>
+  </ul></li>
+  <li>Example
+  <ul>
+    <li><code>Hdfs.put( session ).file( &quot;README&quot; ).to( &quot;/user/guest/example/README&quot; ).now()</code></li>
+  </ul></li>
+</ul><h6><a id="rm+-+Delete+a+file+or+directory+(DELETE)"></a>rm - Delete a file or directory (DELETE)</h6>
 <ul>
-  <li>samples/ExampleSubmitJob.groovy</li>
-  <li>samples/ExampleSubmitWorkflow.groovy</li>
-</ul><p>If you are using the Sandbox VM for your Hadoop cluster you may want to review [these configuration tips|Sandbox Configuration].</p><h4><a id="Example+#1:+WebHDFS+&+Templeton/WebHCat+via+KnoxShell+DSL"></a>Example #1: WebHDFS &amp; Templeton/WebHCat via KnoxShell DSL</h4><p>This example will submit the familiar WordCount Java MapReduce job to the Hadoop cluster via the gateway using the KnoxShell DSL. There are several ways to do this depending upon your preference.</p><p>You can use the &ldquo;embedded&rdquo; Groovy interpreter provided with the distribution.</p>
+  <li>Request
+  <ul>
+    <li>file( String name ) - The fully qualified file or directory name in HDFS.</li>
+    <li>recursive( Boolean recursive ) - Delete directory and all of its contents if True.</li>
+  </ul></li>
+  <li>Response
+  <ul>
+    <li>BasicResponse</li>
+  </ul></li>
+  <li>Example
+  <ul>
+    <li><code>Hdfs.rm( session ).file( &quot;/user/guest/example&quot; ).recursive().now()</code></li>
+  </ul></li>
+</ul><h3><a id="WebHCat"></a>WebHCat</h3><p>TODO</p><h4><a id="WebHCat+URL+Mapping"></a>WebHCat URL Mapping</h4><p>TODO</p><h4><a id="WebHCat+Examples"></a>WebHCat Examples</h4><p>TODO</p><h4><a id="Example+#1:+WebHDFS+&+Templeton/WebHCat+via+KnoxShell+DSL"></a>Example #1: WebHDFS &amp; Templeton/WebHCat via KnoxShell DSL</h4><p>This example will submit the familiar WordCount Java MapReduce job to the Hadoop cluster via the gateway using the KnoxShell DSL. There are several ways to do this depending upon your preference.</p><p>You can use the &ldquo;embedded&rdquo; Groovy interpreter provided with the distribution.</p>
 <pre><code>java -jar bin/shell.jar samples/ExampleSubmitJob.groovy
 </code></pre><p>You can manually type in the KnoxShell DSL script into the &ldquo;embedded&rdquo; Groovy interpreter provided with the distribution.</p>
 <pre><code>java -jar bin/shell.jar
@@ -1364,17 +1488,7 @@ println &quot;Done &quot; + done
 println &quot;Shutdown &quot; + hadoop.shutdown( 10, SECONDS )
 
 exit
-</code></pre><h3><a id="Oozie"></a>Oozie</h3><p>TODO</p><h4><a id="Oozie+URL+Mapping"></a>Oozie URL Mapping</h4><p>TODO</p><h4><a id="Oozie+Examples"></a>Oozie Examples</h4><p>TODO</p><h5><a id="Assumptions"></a>Assumptions</h5><p>This document assumes a few things about your environment in order to simplify the examples.</p>
-<ul>
-  <li>The JVM is executable as simply java.</li>
-  <li>The Apache Knox Gateway is installed and functional.</li>
-  <li>The example commands are executed within the context of the GATEWAY_HOME current directory. The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.</li>
-  <li>A few examples optionally require the use of commands from a standard Groovy installation. These examples are optional but to try them you will need Groovy <a href="http://groovy.codehaus.org/Installing+Groovy">installed</a>.</li>
-</ul><h4><a id="Customization"></a>Customization</h4><p>These examples may need to be tailored to the execution environment. In particular hostnames and ports may need to be changes to match your environment. In particular there are two example files in the distribution that may need to be customized. Take a moment to review these files. All of the values that may need to be customized can be found together at the top of each file.</p>
-<ul>
-  <li>samples/ExampleSubmitJob.groovy</li>
-  <li>samples/ExampleSubmitWorkflow.groovy</li>
-</ul><p>If you are using the Sandbox VM for your Hadoop cluster you may want to review <a href="#Sandbox+Configuration">Sandbox Configuration</a>.</p><h4><a id="Example+#2:+WebHDFS+&+Oozie+via+KnoxShell+DSL"></a>Example #2: WebHDFS &amp; Oozie via KnoxShell DSL</h4><p>This example will also submit the familiar WordCount Java MapReduce job to the Hadoop cluster via the gateway using the KnoxShell DSL. However in this case the job will be submitted via a Oozie workflow. There are several ways to do this depending upon your preference.</p><p>You can use the &ldquo;embedded&rdquo; Groovy interpreter provided with the distribution.</p>
+</code></pre><h3><a id="Oozie"></a>Oozie</h3><p>TODO</p><h4><a id="Oozie+URL+Mapping"></a>Oozie URL Mapping</h4><p>TODO</p><h4><a id="Oozie+Examples"></a>Oozie Examples</h4><p>TODO</p><h4><a id="Example+#2:+WebHDFS+&+Oozie+via+KnoxShell+DSL"></a>Example #2: WebHDFS &amp; Oozie via KnoxShell DSL</h4><p>This example will also submit the familiar WordCount Java MapReduce job to the Hadoop cluster via the gateway using the KnoxShell DSL. However in this case the job will be submitted via a Oozie workflow. There are several ways to do this depending upon your preference.</p><p>You can use the &ldquo;embedded&rdquo; Groovy interpreter provided with the distribution.</p>
 <pre><code>java -jar bin/shell.jar samples/ExampleSubmitWorkflow.groovy
 </code></pre><p>You can manually type in the KnoxShell DSL script into the &ldquo;embedded&rdquo; Groovy interpreter provided with the distribution.</p>
 <pre><code>java -jar bin/shell.jar
@@ -1560,13 +1674,7 @@ curl -i -k -u bob:bob-password -X GET \
 # 11. Optionally cleanup the test directory
 curl -i -k -u bob:bob-password -X DELETE \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&amp;recursive=true&#39;
-</code></pre><h3><a id="HBase"></a>HBase</h3><p>TODO</p><h4><a id="HBase+URL+Mapping"></a>HBase URL Mapping</h4><p>TODO</p><h4><a id="HBase+Examples"></a>HBase Examples</h4><p>TODO</p><p>The examples below illustrate the set of basic operations with HBase instance using Stargate REST API. Use following link to get more more details about HBase/Stargate API: <a href="http://wiki.apache.org/hadoop/Hbase/Stargate">http://wiki.apache.org/hadoop/Hbase/Stargate</a>.</p><h3><a id="Assumptions"></a>Assumptions</h3><p>This document assumes a few things about your environment in order to simplify the examples.</p>
-<ol>
-  <li>The JVM is executable as simply java.</li>
-  <li>The Apache Knox Gateway is installed and functional.</li>
-  <li>The example commands are executed within the context of the GATEWAY_HOME current directory. The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.</li>
-  <li>A few examples optionally require the use of commands from a standard Groovy installation. These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].</li>
-</ol><h3><a id="HBase+Stargate+Setup"></a>HBase Stargate Setup</h3><h4><a id="Launch+Stargate"></a>Launch Stargate</h4><p>The command below launches the Stargate daemon on port 60080</p>
+</code></pre><h3><a id="HBase"></a>HBase</h3><p>TODO</p><h4><a id="HBase+URL+Mapping"></a>HBase URL Mapping</h4><p>TODO</p><h4><a id="HBase+Examples"></a>HBase Examples</h4><p>TODO</p><p>The examples below illustrate the set of basic operations with HBase instance using Stargate REST API. Use following link to get more more details about HBase/Stargate API: <a href="http://wiki.apache.org/hadoop/Hbase/Stargate">http://wiki.apache.org/hadoop/Hbase/Stargate</a>.</p><h3><a id="HBase+Stargate+Setup"></a>HBase Stargate Setup</h3><h4><a id="Launch+Stargate"></a>Launch Stargate</h4><p>The command below launches the Stargate daemon on port 60080</p>
 <pre><code>sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
 </code></pre><p>60080 post is used because it was specified in sample Hadoop cluster deployment <code>{GATEWAY_HOME}/deployments/sandbox.xml</code>.</p><h4><a id="Configure+Sandbox+port+mapping+for+VirtualBox"></a>Configure Sandbox port mapping for VirtualBox</h4>
 <ol>
@@ -1578,7 +1686,7 @@ curl -i -k -u bob:bob-password -X DELETE
   <li>Press Plus button to insert new rule: Name=Stargate, Host Port=60080, Guest Port=60080</li>
   <li>Press OK to close the rule window</li>
   <li>Press OK to Network window save the changes</li>
-</ol><p>60080 post is used because it was specified in sample Hadoop cluster deployment <code>{GATEWAY_HOME}/deployments/sandbox.xml</code>.</p><h3><a id="HBase/Stargate+via+KnoxShell+DSL"></a>HBase/Stargate via KnoxShell DSL</h3><h4><a id="Usage"></a>Usage</h4><p>For more details about client DSL usage please follow this [page|https://cwiki.apache.org/confluence/display/KNOX/Client+Usage].</p><h5><a id="systemVersion()+-+Query+Software+Version."></a>systemVersion() - Query Software Version.</h5>
+</ol><p>Port 60080 is used because it was specified in the sample Hadoop cluster deployment <code>{GATEWAY_HOME}/deployments/sandbox.xml</code>.</p><h3><a id="HBase/Stargate+client+DSL"></a>HBase/Stargate client DSL</h3><h4><a id="Usage"></a>Usage</h4><p>For more details about client DSL usage please follow this <a href="https://cwiki.apache.org/confluence/display/KNOX/Client+Usage">page</a>.</p><h5><a id="systemVersion()+-+Query+Software+Version."></a>systemVersion() - Query Software Version.</h5>
 <ul>
   <li>Request
   <ul>
@@ -2114,14 +2222,7 @@ session.shutdown(10, SECONDS)
 </code></pre><h4><a id="Delete+table"></a>Delete table</h4>
 <pre><code>% curl -ik -u guest:guest-password\
  -X DELETE &#39;https://localhost:8443/gateway/sandbox/hbase/table1/schema&#39;
-</code></pre><h3><a id="Hive"></a>Hive</h3><p>TODO</p><h4><a id="Hive+URL+Mapping"></a>Hive URL Mapping</h4><p>TODO</p><h4><a id="Hive+Examples"></a>Hive Examples</h4><p>This guide provides detailed examples for how to to some basic interactions with Hive via the Apache Knox Gateway.</p><h5><a id="Assumptions"></a>Assumptions</h5><p>This document assumes a few things about your environment in order to simplify the examples.</p>
-<ol>
-  <li>The JVM is executable as simply java.</li>
-  <li>The Apache Knox Gateway is installed and functional.</li>
-  <li>Minor Hive version is 0.12.0.</li>
-  <li>The example commands are executed within the context of the GATEWAY_HOME current directory.  The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.</li>
-  <li>A few examples optionally require the use of commands from a standard Groovy installation.  These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].</li>
-</ol><h5><a id="Setup"></a>Setup</h5>
+</code></pre><h3><a id="Hive"></a>Hive</h3><p>TODO</p><h4><a id="Hive+URL+Mapping"></a>Hive URL Mapping</h4><p>TODO</p><h4><a id="Hive+Examples"></a>Hive Examples</h4><p>This guide provides detailed examples for how to to some basic interactions with Hive via the Apache Knox Gateway.</p><h5><a id="Hive+Setup"></a>Hive Setup</h5>
 <ol>
   <li>Make sure you are running the correct version of Hive to ensure JDBC/Thrift/HTTP support.</li>
   <li>Make sure Hive is running on the correct port.</li>

Modified: incubator/knox/site/index.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/index.html?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/site/index.html (original)
+++ incubator/knox/site/index.html Fri Oct  4 21:04:07 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 3, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131003" />
+    <meta name="Date-Revision-yyyymmdd" content="20131004" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-03</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/issue-tracking.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/issue-tracking.html?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/site/issue-tracking.html (original)
+++ incubator/knox/site/issue-tracking.html Fri Oct  4 21:04:07 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 3, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131003" />
+    <meta name="Date-Revision-yyyymmdd" content="20131004" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-03</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/license.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/license.html?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/site/license.html (original)
+++ incubator/knox/site/license.html Fri Oct  4 21:04:07 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 3, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131003" />
+    <meta name="Date-Revision-yyyymmdd" content="20131004" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-03</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/mail-lists.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/mail-lists.html?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/site/mail-lists.html (original)
+++ incubator/knox/site/mail-lists.html Fri Oct  4 21:04:07 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 3, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131003" />
+    <meta name="Date-Revision-yyyymmdd" content="20131004" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-03</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/project-info.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/project-info.html?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/site/project-info.html (original)
+++ incubator/knox/site/project-info.html Fri Oct  4 21:04:07 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 3, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131003" />
+    <meta name="Date-Revision-yyyymmdd" content="20131004" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-03</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/team-list.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/team-list.html?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/site/team-list.html (original)
+++ incubator/knox/site/team-list.html Fri Oct  4 21:04:07 2013
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 3, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Oct 4, 2013 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131003" />
+    <meta name="Date-Revision-yyyymmdd" content="20131004" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-10-03</span>
+                &nbsp;| <span id="publishDate">Last Published: 2013-10-04</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/trunk/books/0.3.0/book.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book.md (original)
+++ incubator/knox/trunk/books/0.3.0/book.md Fri Oct  4 21:04:07 2013
@@ -35,8 +35,8 @@
     * #[Verify]
     * #[Install]
     * #[Supported Services]
-    * #[Basic Usage]
     * #[Sandbox Configuration]
+    * #[Basic Usage]
 * #[Gateway Details]
     * #[Configuration]
     * #[Authentication]

Modified: incubator/knox/trunk/books/0.3.0/book_getting-started.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_getting-started.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_getting-started.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_getting-started.md Fri Oct  4 21:04:07 2013
@@ -182,6 +182,11 @@ Only more recent versions of some Hadoop
 | Hive (via ODBC)    | 0.12.0     | ![?]        | ![?]   |
 
 
+### Sandbox Configuration ###
+
+TODO
+
+
 ### Basic Usage ###
 
 The steps described below are intended to get the Knox Gateway server up and running in its default configuration.

Modified: incubator/knox/trunk/books/0.3.0/book_service-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_service-details.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_service-details.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_service-details.md Fri Oct  4 21:04:07 2013
@@ -17,7 +17,53 @@
 
 ## Service Details ##
 
-TODO - Service details overview
+In the sections that follow the integrations currently available out of the box with the gateway will be described.
+In general these sections will include examples that demonstrate how to access each of these services via the gateway.
+In many cases this will include both the use of [cURL][curl] as a REST API client as well as the use of the Knox Client DSL.
+You may notice that there are some minor differences between using the REST API of a given service via the gateway and using it directly.
+In general this is necessary in order to achieve the goal of not leaking internal Hadoop cluster details to the client.
+
+Keep in mind that the gateway uses a plugin model for supporting Hadoop services.
+Check back with the [Apache Knox][site] site for the latest news on plugin availability.
+You can also create your own custom plugin to extend the capabilities of the gateway.
+
+### Assumptions
+
+This document assumes a few things about your environment in order to simplify the examples.
+
+* The JVM is executable as simply java.
+* The Apache Knox Gateway is installed and functional.
+* The example commands are executed within the context of the GATEWAY_HOME current directory.
+The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
+* The [cURL][curl] command line HTTP client utility is installed and functional.
+* A few examples optionally require the use of commands from a standard Groovy installation.
+These examples are optional but to try them you will need Groovy [installed](http://groovy.codehaus.org/Installing+Groovy).
+* The default configuration for all of the samples is set up for use with Hortonworks' [Sandbox][sandbox] version 2.
+
+### Customization
+
+Using these samples with other Hadoop installations will require changes to the steps described here as well as changes to the referenced sample scripts.
+This will also likely require changes to the gateway's default configuration.
+In particular, host names, ports, user names and passwords may need to be changed to match your environment.
+These changes may need to be made to the gateway configuration and also to the Groovy sample script files in the distribution.
+All of the values that may need to be customized in the sample scripts can be found together at the top of each of these files.
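+
+As an illustration only, the customizable values at the top of the WebHDFS sample script look roughly like the snippet below; the exact variable names vary from script to script.
+
+    // Values assumed here are the Sandbox defaults used elsewhere in this guide; adjust them for your environment.
+    gateway = "https://localhost:8443/gateway/sandbox"
+    username = "guest"
+    password = "guest-password"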
+
+### cURL
+
+The cURL HTTP client command line utility is used extensively in the examples for each service.
+In particular this form of the cURL command line is used repeatedly.
+
+    curl -i -k -u guest:guest-password ...
+
+The option -i (aka --include) is used to output HTTP response header information.
+This will be important when the content of the HTTP Location header is required for subsequent requests.
+
+The option -k (aka --insecure) is used to avoid any issues resulting from the use of demonstration SSL certificates.
+
+The option -u (aka --user) is used to provide the credentials to be used when the client is challenged by the gateway.
+
+Keep in mind that the samples do not use the cookie features of cURL for the sake of simplicity.
+Therefore each request via cURL will result in a separate authentication.
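+
+By contrast, the client DSL examples later in this section create a single session object up front and reuse it for every request.
+A minimal sketch of that pattern, assuming the same gateway URL and guest credentials used by the samples:
+
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.hadoop.gateway.shell.hdfs.Hdfs
+
+    // Log in once; the returned session is passed to each DSL request that follows.
+    session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+
+    // Any number of DSL requests can share the session.
+    println Hdfs.ls( session ).dir( "/user/guest" ).now().string
+
+    // Release the session when finished.
+    session.shutdown()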
 
 <<service_webhdfs.md>>
 <<service_webhcat.md>>

Modified: incubator/knox/trunk/books/0.3.0/service_hbase.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/service_hbase.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/service_hbase.md (original)
+++ incubator/knox/trunk/books/0.3.0/service_hbase.md Fri Oct  4 21:04:07 2013
@@ -30,18 +30,10 @@ TODO
 The examples below illustrate the set of basic operations with HBase instance using Stargate REST API.
 Use following link to get more more details about HBase/Stargate API: http://wiki.apache.org/hadoop/Hbase/Stargate.
 
-### Assumptions ###
-
-This document assumes a few things about your environment in order to simplify the examples.
-
-1. The JVM is executable as simply java.
-2. The Apache Knox Gateway is installed and functional.
-3. The example commands are executed within the context of the GATEWAY_HOME current directory.  The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
-4. A few examples optionally require the use of commands from a standard Groovy installation.  These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].
-
 ### HBase Stargate Setup ###
 
 #### Launch Stargate ####
+
 The command below launches the Stargate daemon on port 60080
 
     sudo /usr/lib/hbase/bin/hbase-daemon.sh start rest -p 60080
@@ -61,9 +53,10 @@ The command below launches the Stargate 
 
 60080 post is used because it was specified in sample Hadoop cluster deployment `{GATEWAY_HOME}/deployments/sandbox.xml`.
 
-### HBase/Stargate via KnoxShell DSL
+### HBase/Stargate client DSL
 
 #### Usage
+
 For more details about client DSL usage please follow this [page|https://cwiki.apache.org/confluence/display/KNOX/Client+Usage].
  
 ##### systemVersion() - Query Software Version.
@@ -112,6 +105,7 @@ For more details about client DSL usage 
     * `HBase.session(session).table().schema().now().string`
 
 ##### table(String tableName).create() - Create Table Schema.
+
 * Request
     * attribute(String name, Object value) - the table's attribute.
     * family(String name) - starts family definition. Has sub requests:
@@ -134,6 +128,7 @@ For more details about client DSL usage 
        .now()```
 
 ##### table(String tableName).update() - Update Table Schema.
+
 * Request
     * family(String name) - starts family definition. Has sub requests:
     * attribute(String name, Object value) - the family's attribute.
@@ -151,6 +146,7 @@ For more details about client DSL usage 
          .now()```
 
 ##### table(String tableName).regions() - Query Table Metadata.
+
 * Request
     * No request parameters.
 * Response
@@ -159,6 +155,7 @@ For more details about client DSL usage 
     * `HBase.session(session).table(tableName).regions().now().string`
 
 ##### table(String tableName).delete() - Delete Table.
+
 * Request
     * No request parameters.
 * Response
@@ -167,6 +164,7 @@ For more details about client DSL usage 
     * `HBase.session(session).table(tableName).delete().now()`
 
 ##### table(String tableName).row(String rowId).store() - Cell Store.
+
 * Request
     * column(String family, String qualifier, Object value, Long time) - the data to store; "qualifier" may be "null"; "time" is optional.
 * Response
@@ -182,6 +180,7 @@ For more details about client DSL usage 
          .now()```
 
 ##### table(String tableName).row(String rowId).query() - Cell or Row Query.
+
 * rowId is optional. Querying with null or empty rowId will select all rows.
 * Request
     * column(String family, String qualifier) - the column to select; "qualifier" is optional.
@@ -204,6 +203,7 @@ For more details about client DSL usage 
          .now().string```
 
 ##### table(String tableName).row(String rowId).delete() - Row, Column, or Cell Delete.
+
 * Request
     * column(String family, String qualifier) - the column to delete; "qualifier" is optional.
     * time(Long) - the upper bound for time filtration.
@@ -221,6 +221,7 @@ For more details about client DSL usage 
          .now()```
 
 ##### table(String tableName).scanner().create() - Scanner Creation.
+
 * Request
     * startRow(String) - the lower bound for filtration by row id.
     * endRow(String) - the upper bound for filtration by row id.
@@ -248,6 +249,7 @@ For more details about client DSL usage 
          .now()```
 
 ##### table(String tableName).scanner(String scannerId).getNext() - Scanner Get Next.
+
 * Request
     * No request parameters.
 * Response
@@ -256,6 +258,7 @@ For more details about client DSL usage 
     * `HBase.session(session).table(tableName).scanner(scannerId).getNext().now().string`
 
 ##### table(String tableName).scanner(String scannerId).delete() - Scanner Deletion.
+
 * Request
     * No request parameters.
 * Response

Modified: incubator/knox/trunk/books/0.3.0/service_hive.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/service_hive.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/service_hive.md (original)
+++ incubator/knox/trunk/books/0.3.0/service_hive.md Fri Oct  4 21:04:07 2013
@@ -27,19 +27,7 @@ TODO
 
 This guide provides detailed examples for how to to some basic interactions with Hive via the Apache Knox Gateway.
 
-##### Assumptions #####
-
-This document assumes a few things about your environment in order to simplify the examples.
-
-1. The JVM is executable as simply java.
-2. The Apache Knox Gateway is installed and functional.
-3. Minor Hive version is 0.12.0.
-4. The example commands are executed within the context of the GATEWAY_HOME current directory.
-   The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
-5. A few examples optionally require the use of commands from a standard Groovy installation.
-   These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].
-
-##### Setup #####
+##### Hive Setup #####
 
 1. Make sure you are running the correct version of Hive to ensure JDBC/Thrift/HTTP support.
 2. Make sure Hive is running on the correct port.

Modified: incubator/knox/trunk/books/0.3.0/service_oozie.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/service_oozie.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/service_oozie.md (original)
+++ incubator/knox/trunk/books/0.3.0/service_oozie.md Fri Oct  4 21:04:07 2013
@@ -27,30 +27,6 @@ TODO
 
 TODO
 
-##### Assumptions
-
-This document assumes a few things about your environment in order to simplify the examples.
-
-* The JVM is executable as simply java.
-* The Apache Knox Gateway is installed and functional.
-* The example commands are executed within the context of the GATEWAY_HOME current directory.
-The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
-* A few examples optionally require the use of commands from a standard Groovy installation.
-These examples are optional but to try them you will need Groovy [installed](http://groovy.codehaus.org/Installing+Groovy).
-
-#### Customization
-
-These examples may need to be tailored to the execution environment.
-In particular hostnames and ports may need to be changes to match your environment.
-In particular there are two example files in the distribution that may need to be customized.
-Take a moment to review these files.
-All of the values that may need to be customized can be found together at the top of each file.
-
-* samples/ExampleSubmitJob.groovy
-* samples/ExampleSubmitWorkflow.groovy
-
-If you are using the Sandbox VM for your Hadoop cluster you may want to review #[Sandbox Configuration].
-
 #### Example #2: WebHDFS & Oozie via KnoxShell DSL
 
 This example will also submit the familiar WordCount Java MapReduce job to the Hadoop cluster via the gateway using the KnoxShell DSL.

Modified: incubator/knox/trunk/books/0.3.0/service_webhcat.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/service_webhcat.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/service_webhcat.md (original)
+++ incubator/knox/trunk/books/0.3.0/service_webhcat.md Fri Oct  4 21:04:07 2013
@@ -27,31 +27,6 @@ TODO
 
 TODO
 
-#### Assumptions
-
-This document assumes a few things about your environment in order to simplify the examples.
-
-* The JVM is executable as simply java.
-* The Apache Knox Gateway is installed and functional.
-* The example commands are executed within the context of the GATEWAY_HOME current directory.
-The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
-* A few examples optionally require the use of commands from a standard Groovy installation.
-These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].
-
-#### Customization
-
-These examples may need to be tailored to the execution environment.
-In particular hostnames and ports may need to be changes to match your environment.
-In particular there are two example files in the distribution that may need to be customized.
-Take a moment to review these files.
-All of the values that may need to be customized can be found together at the top of each file.
-
-* samples/ExampleSubmitJob.groovy
-* samples/ExampleSubmitWorkflow.groovy
-
-If you are using the Sandbox VM for your Hadoop cluster you may want to review [these configuration tips|Sandbox Configuration].
-
-
 #### Example #1: WebHDFS & Templeton/WebHCat via KnoxShell DSL
 
 This example will submit the familiar WordCount Java MapReduce job to the Hadoop cluster via the gateway using the KnoxShell DSL.

Modified: incubator/knox/trunk/books/0.3.0/service_webhdfs.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/service_webhdfs.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/service_webhdfs.md (original)
+++ incubator/knox/trunk/books/0.3.0/service_webhdfs.md Fri Oct  4 21:04:07 2013
@@ -17,93 +17,222 @@
 
 ### WebHDFS ###
 
-TODO
+REST API access to HDFS in a Hadoop cluster is provided by WebHDFS.
+The [WebHDFS REST API](http://hadoop.apache.org/docs/stable/webhdfs.html) documentation is available online.
+WebHDFS must be enabled in the hdfs-site.xml configuration file.
+In sandbox this configuration file is located at /etc/hadoop/conf/hdfs-site.xml.
+Note the properties shown below as they are related to configuration required by the gateway.
+Some of these represent the default values and may not actually be present in hdfs-site.xml.
+
+    <property>
+        <name>dfs.webhdfs.enabled</name>
+        <value>true</value>
+    </property>
+    <property>
+        <name>dfs.namenode.rpc-address</name>
+        <value>sandbox.hortonworks.com:8020</value>
+    </property>
+    <property>
+        <name>dfs.namenode.http-address</name>
+        <value>sandbox.hortonworks.com:50070</value>
+    </property>
+    <property>
+        <name>dfs.https.namenode.https-address</name>
+        <value>sandbox.hortonworks.com:50470</value>
+    </property>
+
+The values above need to be reflected in each topology descriptor file deployed to the gateway.
+The gateway by default includes a sample topology descriptor file `{GATEWAY_HOME}/deployments/sandbox.xml`.
+The values in this sample are configured to work with an installed Sandbox VM.
+
+    <service>
+        <role>NAMENODE</role>
+        <url>hdfs://localhost:8020</url>
+    </service>
+    <service>
+        <role>WEBHDFS</role>
+        <url>http://localhost:50070/webhdfs</url>
+    </service>
+
+The URL provided for the role NAMENODE does not result in an endpoint being exposed by the gateway.
+This information is only required so that other URLs that reference the Name Node's RPC address can be rewritten.
+This prevents clients from needing to be aware of the internal cluster details.
 
-#### WebHDFS URL Mapping ####
+By default the gateway is configured to use the HTTP endpoint for WebHDFS in the Sandbox.
+This could alternatively be configured to use the HTTPS endpoint by providing the correct address.
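+For example, assuming the gateway targets the Sandbox VM and the HTTPS port 50470 shown in the hdfs-site.xml above, the WEBHDFS service entry might look something like this:
+
+    <service>
+        <role>WEBHDFS</role>
+        <url>https://localhost:50470/webhdfs</url>
+    </service>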
 
-TODO
+#### WebHDFS URL Mapping ####
 
-#### WebHDFS Examples ####
+For Name Node URLs, the mapping of Knox Gateway accessible WebHDFS URLs to direct WebHDFS URLs is simple.
 
-TODO
+| ------- | ----------------------------------------------------------------------------- |
+| Gateway | `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs` |
+| Cluster | `http://{webhdfs-host}:50070/webhdfs`                                         |
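+
+For example, assuming the Sandbox topology and the default addresses shown above, a directory listing request would map roughly as follows:
+
+    Gateway: https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example?op=LISTSTATUS
+    Cluster: http://sandbox.hortonworks.com:50070/webhdfs/v1/user/guest/example?op=LISTSTATUS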
 
+However, there is a subtle difference in the URLs that WebHDFS returns in the Location header of many responses.
+Direct WebHDFS requests may return Location headers that contain the address of a particular Data Node.
+The gateway will rewrite these URLs to ensure subsequent requests come back through the gateway and internal cluster details are protected.
 
-#### Assumptions
+A WebHDFS request to the Name Node to retrieve a file will return a URL of the form below in the Location header.
 
-This document assumes a few things about your environment in order to simplify the examples.
+    http://{datanode-host}:{datanode-port}/webhdfs/v1/{path}?...
 
-* The JVM is executable as simply java.
-* The Apache Knox Gateway is installed and functional.
-* The example commands are executed within the context of the GATEWAY_HOME current directory.
-The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
-* A few examples optionally require the use of commands from a standard Groovy installation.
-These examples are optional but to try them you will need Groovy [installed|http://groovy.codehaus.org/Installing+Groovy].
+Note that this URL contains the network location of a Data Node.
+The gateway will rewrite this URL to look like the URL below.
 
-h2. Customization
+    https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs/data/v1/{path}?_={encrypted-query-parameters}
 
-These examples may need to be tailored to the execution environment.
-In particular hostnames and ports may need to be changes to match your environment.
-In particular there are two example files in the distribution that may need to be customized.
-Take a moment to review these files.
-All of the values that may need to be customized can be found together at the top of each file.
+The `{encrypted-query-parameters}` will contain the `{datanode-host}` and `{datanode-port}` information.
+This information, along with the original query parameters, is encrypted so that the internal Hadoop details are protected.
 
-* samples/ExampleWebHDFS.groovy
+#### WebHDFS Examples ####
 
+The examples below upload a file, download the file and list the contents of the directory.
 
-#### WebHDFS via KnoxShell DSL
+##### WebHDFS via client DSL
 
-You can use the Groovy interpreter provided with the distribution.
+You can use the Groovy example scripts and interpreter provided with the distribution.
 
-    java -jar bin/shell.jar samples/ExampleWebHDFS.groovy
+    java -jar bin/shell.jar samples/ExampleWebHfsPutGet.groovy
+    java -jar bin/shell.jar samples/ExampleWebHfsLs.groovy
 
-You can manually type in the KnoxShell DSL script into the interactive Groovy interpreter provided with the distribution.
+You can manually type the client DSL script into the KnoxShell interactive Groovy interpreter provided with the distribution.
+The command below starts the KnoxShell in interactive mode.
 
     java -jar bin/shell.jar
 
-Each line from the file below will need to be typed or copied into the interactive shell.
-
-##### samples/ExampleHdfs.groovy
+Each line below could be typed or copied into the interactive shell and executed.
+This is provided as an example to illustrate the use of the client DSL.
 
-    import groovy.json.JsonSlurper
+    // Import the client DSL and a few useful utilities for working with JSON.
     import org.apache.hadoop.gateway.shell.Hadoop
     import org.apache.hadoop.gateway.shell.hdfs.Hdfs
+    import groovy.json.JsonSlurper
 
+    // Set up some basic configuration.
     gateway = "https://localhost:8443/gateway/sandbox"
-    username = "bob"
-    password = "bob-password"
-    dataFile = "README"
+    username = "guest"
+    password = "guest-password"
 
+    // Start the session.
     session = Hadoop.login( gateway, username, password )
-    Hdfs.rm( session ).file( "/tmp/example" ).recursive().now()
-    Hdfs.put( session ).file( dataFile ).to( "/tmp/example/README" ).now()
-    text = Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+
+    // Clean up anything left over from a previous run.
+    Hdfs.rm( session ).file( "/user/guest/example" ).recursive().now()
+
+    // Upload the README file from the gateway installation directory to HDFS.
+    Hdfs.put( session ).file( "README" ).to( "/user/guest/example/README" ).now()
+
+    // Download the README from HDFS.
+    text = Hdfs.get( session ).from( "/user/guest/example/README" ).now().string
+    println text
+
+    // List the contents of the directory.
+    text = Hdfs.ls( session ).dir( "/user/guest/example" ).now().string
     json = (new JsonSlurper()).parseText( text )
     println json.FileStatuses.FileStatus.pathSuffix
-    text = Hdfs.get( session ).from( "/tmp/example/README" ).now().string
-    println text
-    Hdfs.rm( session ).file( "/tmp/example" ).recursive().now()
+
+    // Clean up the directory.
+    Hdfs.rm( session ).file( "/user/guest/example" ).recursive().now()
+
+    // Close the session.
     session.shutdown()
 
 
-#### WebHDFS via cURL
+##### WebHDFS via cURL
+
+You can use cURL to invoke the REST APIs directly via the gateway.
+
+###### Optionally clean up the sample directory in case a previous example was run without cleaning up.
+
+    curl -i -k -u guest:guest-password -X DELETE \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example?op=DELETE&recursive=true'
+
+###### Register the name for a sample file README in /user/guest/example.
 
-    # 1. Optionally cleanup the sample directory in case a previous example was run without cleaning up.
-    curl -i -k -u bob:bob-password -X DELETE \
-        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&recursive=true'
-
-    # 2. Create the inode for a sample input file readme.txt in /tmp/test/input.
-    curl -i -k -u bob:bob-password -X PUT \
-        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input/README?op=CREATE'
-
-    # 3. Upload readme.txt to /tmp/test/input.  Use the readme.txt in {GATEWAY_HOME}.
-    # The sample below uses this README file found in {GATEWAY_HOME}.
-    curl -i -k -u bob:bob-password -T README -X PUT \
+    curl -i -k -u guest:guest-password -X PUT \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example/README?op=CREATE'
+
+###### Upload README to /user/guest/example.  Use the README in {GATEWAY_HOME}.
+
+    curl -i -k -u guest:guest-password -T README -X PUT \
+        '{Value of Location header from command above}'
+
+###### List the contents of the directory /user/guest/example.
+
+    curl -i -k -u guest:guest-password -X GET \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example?op=LISTSTATUS'
+
+###### Request the content of the README file in /user/guest/example.
+
+    curl -i -k -u guest:guest-password -X GET \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example/README?op=OPEN'
+
+###### Read the content of the file.
+
+    curl -i -k -u guest:guest-password -X GET \
         '{Value of Location header from command above}'
 
-    # 4. List the contents of the output directory /tmp/test/output
-    curl -i -k -u bob:bob-password -X GET \
-        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test/input?op=LISTSTATUS'
-
-    # 5. Optionally cleanup the test directory
-    curl -i -k -u bob:bob-password -X DELETE \
-        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/test?op=DELETE&recursive=true'
+###### Optionally clean up the example directory.
+
+    curl -i -k -u guest:guest-password -X DELETE \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/user/guest/example?op=DELETE&recursive=true'
+
+
+##### WebHDFS client DSL
+
+###### get - Get a file from HDFS (OPEN).
+
+* Request
+    * from( String name ) - The full name of the file in HDFS.
+    * file( String name ) - The name of a local file to create with the content.
+* Response
+    * BasicResponse
+    * If the file parameter is specified, the content will be streamed to that file (see the sketch below).
+* Example
+    * `Hdfs.get( session ).from( "/user/guest/example/README" ).now().string`
+
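+For example, a minimal sketch of streaming the file to a local copy instead of reading it as a string (the local file name is only illustrative):
+
+    // Download README from HDFS and write it to a local file.
+    Hdfs.get( session ).from( "/user/guest/example/README" ).file( "README.copy" ).now()
+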
+###### ls - Query the contents of a directory (LISTSTATUS)
+
+* Request
+    * dir( String name ) - The full name of the directory in HDFS.
+* Response
+    * BasicResponse
+* Example
+    * `Hdfs.ls( session ).dir( "/user/guest/example" ).now().string`
+
+###### mkdir - Create a directory in HDFS (MKDIRS)
+
+* Request
+    * dir( String name ) - The full name of the directory to create in HDFS.
+    * perm( String perm ) - The permissions for the directory (e.g. 644); see the sketch below.
+* Response
+    * BasicResponse
+* Example
+    * `Hdfs.mkdir( session ).dir( "/user/guest/example" ).now()`
+
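+For example, a minimal sketch of creating a directory with explicit permissions (the path and permission value are only illustrative):
+
+    // Create an example directory in HDFS with the given octal permissions.
+    Hdfs.mkdir( session ).dir( "/user/guest/example" ).perm( "755" ).now()
+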
+###### put - Write a file into HDFS (CREATE)
+
+* Request
+    * text( String text ) - Text to upload to HDFS.  Takes precedence over file if both are present (see the sketch below).
+    * file( String name ) - The name of a local file to upload to HDFS.
+    * to( String name ) - The fully qualified name to create in HDFS.
+* Response
+    * BasicResponse
+* Example
+    * `Hdfs.put( session ).file( "README" ).to( "/user/guest/example/README" ).now()`
+
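+For example, a sketch of uploading literal text rather than a local file (the content and target path are only illustrative):
+
+    // Write a small text file directly into HDFS without a local source file.
+    Hdfs.put( session ).text( "Hello from the gateway" ).to( "/user/guest/example/hello.txt" ).now()
+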
+###### rm - Delete a file or directory (DELETE)
+
+* Request
+    * file( String name ) - The fully qualified file or directory name in HDFS.
+    * recursive( Boolean recursive ) - Delete the directory and all of its contents if true.
+* Response
+    * BasicResponse
+* Example
+    * `Hdfs.rm( session ).file( "/user/guest/example" ).recursive().now()`
+
+
+
+
+

Modified: incubator/knox/trunk/books/common/header.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/common/header.md?rev=1529302&r1=1529301&r2=1529302&view=diff
==============================================================================
--- incubator/knox/trunk/books/common/header.md (original)
+++ incubator/knox/trunk/books/common/header.md Fri Oct  4 21:04:07 2013
@@ -18,6 +18,7 @@
 <link href="book.css" rel="stylesheet"/>
 
 [asl]: http://www.apache.org/licenses/LICENSE-2.0
+[curl]: http://curl.haxx.se/
 [site]: http://knox.incubator.apache.org
 [jira]: https://issues.apache.org/jira/browse/KNOX
 [mirror]: http://www.apache.org/dyn/closer.cgi/incubator/knox


