knox-commits mailing list archives

From dillido...@apache.org
Subject svn commit: r1556393 [3/5] - in /incubator/knox: site/ site/books/knox-incubating-0-3-0/ site/books/knox-incubating-0-4-0/ trunk/ trunk/books/0.4.0/
Date Tue, 07 Jan 2014 22:45:37 GMT
Propchange: incubator/knox/site/books/knox-incubating-0-4-0/knox-logo.gif
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/knox/site/books/knox-incubating-0-4-0/plus.png
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-4-0/plus.png?rev=1556393&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/knox/site/books/knox-incubating-0-4-0/plus.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/knox/site/books/knox-incubating-0-4-0/question.png
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-4-0/question.png?rev=1556393&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/knox/site/books/knox-incubating-0-4-0/question.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/knox/site/books/knox-incubating-0-4-0/star.png
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-4-0/star.png?rev=1556393&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/knox/site/books/knox-incubating-0-4-0/star.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/knox/site/books/knox-incubating-0-4-0/stop.png
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-4-0/stop.png?rev=1556393&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/knox/site/books/knox-incubating-0-4-0/stop.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/knox/site/books/knox-incubating-0-4-0/warning.png
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-4-0/warning.png?rev=1556393&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/knox/site/books/knox-incubating-0-4-0/warning.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/knox/site/books/knox-incubating-0-4-0/workflow-configuration.xml
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-4-0/workflow-configuration.xml?rev=1556393&view=auto
==============================================================================
--- incubator/knox/site/books/knox-incubating-0-4-0/workflow-configuration.xml (added)
+++ incubator/knox/site/books/knox-incubating-0-4-0/workflow-configuration.xml Tue Jan  7 22:45:36 2014
@@ -0,0 +1,43 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<configuration>
+    <property>
+        <name>user.name</name>
+        <value>default</value>
+    </property>
+    <property>
+        <name>nameNode</name>
+        <value>default</value>
+    </property>
+    <property>
+        <name>jobTracker</name>
+        <value>default</value>
+    </property>
+    <property>
+        <name>inputDir</name>
+        <value>/user/guest/example/input</value>
+    </property>
+    <property>
+        <name>outputDir</name>
+        <value>/user/guest/example/output</value>
+    </property>
+    <property>
+        <name>oozie.wf.application.path</name>
+        <value>/user/guest/example</value>
+    </property>
+</configuration>

Added: incubator/knox/site/books/knox-incubating-0-4-0/workflow-definition.xml
URL: http://svn.apache.org/viewvc/incubator/knox/site/books/knox-incubating-0-4-0/workflow-definition.xml?rev=1556393&view=auto
==============================================================================
--- incubator/knox/site/books/knox-incubating-0-4-0/workflow-definition.xml (added)
+++ incubator/knox/site/books/knox-incubating-0-4-0/workflow-definition.xml Tue Jan  7 22:45:36 2014
@@ -0,0 +1,35 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<workflow-app xmlns="uri:oozie:workflow:0.2" name="wordcount-workflow">
+    <start to="root-node"/>
+    <action name="root-node">
+        <java>
+            <job-tracker>${jobTracker}</job-tracker>
+            <name-node>${nameNode}</name-node>
+            <main-class>org.apache.hadoop.examples.WordCount</main-class>
+            <arg>${inputDir}</arg>
+            <arg>${outputDir}</arg>
+        </java>
+        <ok to="end"/>
+        <error to="fail"/>
+    </action>
+    <kill name="fail">
+        <message>Java failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
+    </kill>
+    <end name="end"/>
+</workflow-app>

Modified: incubator/knox/site/index.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/index.html?rev=1556393&r1=1556392&r2=1556393&view=diff
==============================================================================
--- incubator/knox/site/index.html (original)
+++ incubator/knox/site/index.html Tue Jan  7 22:45:36 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Nov 18, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Jan 7, 2014 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131118" />
+    <meta name="Date-Revision-yyyymmdd" content="20140107" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-11-18</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-01-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/issue-tracking.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/issue-tracking.html?rev=1556393&r1=1556392&r2=1556393&view=diff
==============================================================================
--- incubator/knox/site/issue-tracking.html (original)
+++ incubator/knox/site/issue-tracking.html Tue Jan  7 22:45:36 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Nov 18, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Jan 7, 2014 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131118" />
+    <meta name="Date-Revision-yyyymmdd" content="20140107" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-11-18</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-01-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/license.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/license.html?rev=1556393&r1=1556392&r2=1556393&view=diff
==============================================================================
--- incubator/knox/site/license.html (original)
+++ incubator/knox/site/license.html Tue Jan  7 22:45:36 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Nov 18, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Jan 7, 2014 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131118" />
+    <meta name="Date-Revision-yyyymmdd" content="20140107" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-11-18</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-01-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/mail-lists.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/mail-lists.html?rev=1556393&r1=1556392&r2=1556393&view=diff
==============================================================================
--- incubator/knox/site/mail-lists.html (original)
+++ incubator/knox/site/mail-lists.html Tue Jan  7 22:45:36 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Nov 18, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Jan 7, 2014 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131118" />
+    <meta name="Date-Revision-yyyymmdd" content="20140107" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-11-18</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-01-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/project-info.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/project-info.html?rev=1556393&r1=1556392&r2=1556393&view=diff
==============================================================================
--- incubator/knox/site/project-info.html (original)
+++ incubator/knox/site/project-info.html Tue Jan  7 22:45:36 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Nov 18, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Jan 7, 2014 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131118" />
+    <meta name="Date-Revision-yyyymmdd" content="20140107" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-11-18</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-01-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Modified: incubator/knox/site/team-list.html
URL: http://svn.apache.org/viewvc/incubator/knox/site/team-list.html?rev=1556393&r1=1556392&r2=1556393&view=diff
==============================================================================
--- incubator/knox/site/team-list.html (original)
+++ incubator/knox/site/team-list.html Tue Jan  7 22:45:36 2014
@@ -1,5 +1,5 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Nov 18, 2013 -->
+<!-- Generated by Apache Maven Doxia Site Renderer 1.3 at Jan 7, 2014 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
@@ -10,7 +10,7 @@
       @import url("./css/site.css");
     </style>
     <link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
-    <meta name="Date-Revision-yyyymmdd" content="20131118" />
+    <meta name="Date-Revision-yyyymmdd" content="20140107" />
     <meta http-equiv="Content-Language" content="en" />
                                                     
 <script type="text/javascript">var _gaq = _gaq || [];
@@ -57,7 +57,7 @@
                         <a href="https://cwiki.apache.org/confluence/display/KNOX/Index" class="externalLink" title="Wiki">Wiki</a>
               
                     
-                &nbsp;| <span id="publishDate">Last Published: 2013-11-18</span>
+                &nbsp;| <span id="publishDate">Last Published: 2014-01-07</span>
               &nbsp;| <span id="projectVersion">Version: 0.0.0-SNAPSHOT</span>
             </div>
       <div class="clear">

Added: incubator/knox/trunk/books/0.4.0/book.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.4.0/book.md?rev=1556393&view=auto
==============================================================================
--- incubator/knox/trunk/books/0.4.0/book.md (added)
+++ incubator/knox/trunk/books/0.4.0/book.md Tue Jan  7 22:45:36 2014
@@ -0,0 +1,105 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<<../common/header.md>>
+
+<div id="logo" style="width:100%; text-align:center">
+  <!--img src="knox-logo.gif" alt="Knox"/-->
+</div>
+<br>
+  <img src="knox-logo.gif" alt="Knox"/>
+  <img src="apache-incubator-logo.png" align="right" alt="Incubator"/>
+
+# Apache Knox Gateway 0.4.x (Incubator) User's Guide #
+
+## Table Of Contents ##
+
+* #[Introduction]
+* #[Quick Start]
+* #[Apache Knox Details]
+    * #[Layout]
+    * #[Supported Services]
+    * #[Sandbox Configuration]
+* #[Gateway Details]
+    * #[Configuration]
+    * #[Authentication]
+    * #[LDAPGroupLookup]
+    * #[Identity Assertion]
+    * #[Authorization]
+    * #[Configuration]
+    * #[Secure Clusters]
+* #[Client Details]
+* #[Service Details]
+    * #[WebHDFS]
+    * #[WebHCat]
+    * #[Oozie]
+    * #[HBase]
+    * #[Hive]
+* #[Limitations]
+* #[Troubleshooting]
+* #[Export Controls]
+
+
+## Introduction ##
+
+The Apache Knox Gateway is a system that provides a single point of authentication and access for Apache Hadoop services in a cluster.
+The goal is to simplify Hadoop security for both users (i.e. who access the cluster data and execute jobs) and operators (i.e. who control access and manage the cluster).
+The gateway runs as a server (or cluster of servers) that provides centralized access to one or more Hadoop clusters.
+In general the goals of the gateway are as follows:
+
+* Provide perimeter security for Hadoop REST APIs to make Hadoop security easier to set up and use
+    * Provide authentication and token verification at the perimeter
+    * Enable authentication integration with enterprise and cloud identity management systems
+    * Provide service level authorization at the perimeter
+* Expose a single URL hierarchy that aggregates REST APIs of a Hadoop cluster
+    * Limit the network endpoints (and therefore firewall holes) required to access a Hadoop cluster
+    * Hide the internal Hadoop cluster topology from potential attackers
+
+<<quick_start.md>>
+<<book_getting-started.md>>
+<<book_gateway-details.md>>
+<<book_client-details.md>>
+<<book_service-details.md>>
+<<book_limitations.md>>
+<<book_troubleshooting.md>>
+
+
+## Export Controls ##
+
+Apache Knox Gateway includes cryptographic software.
+The country in which you currently reside may have restrictions on the import, possession, use, and/or
+re-export to another country, of encryption software.
+BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the
+import, possession, or use, and re-export of encryption software, to see if this is permitted.
+See http://www.wassenaar.org for more information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS),
+has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1,
+which includes information security software using or performing cryptographic functions with asymmetric algorithms.
+The form and manner of this Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception
+(see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.
+
+The following provides more details on the included cryptographic software:
+
+* Apache Knox Gateway uses ApacheDS, which in turn uses the Bouncy Castle generic encryption libraries.
+* See http://www.bouncycastle.org for more details on Bouncy Castle.
+* See http://directory.apache.org/apacheds for more details on ApacheDS.
+
+
+<<../common/footer.md>>
+

Added: incubator/knox/trunk/books/0.4.0/book_client-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.4.0/book_client-details.md?rev=1556393&view=auto
==============================================================================
--- incubator/knox/trunk/books/0.4.0/book_client-details.md (added)
+++ incubator/knox/trunk/books/0.4.0/book_client-details.md Tue Jan  7 22:45:36 2014
@@ -0,0 +1,516 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Client Details ##
+
+Hadoop requires a client that can be used to interact remotely with the services provided by the Hadoop cluster.
+This will also be true when using the Apache Knox Gateway to provide perimeter security and centralized access for these services.
+The two primary existing clients for Hadoop are the CLI (i.e. Command Line Interface, hadoop) and HUE (i.e. Hadoop User Environment).
+For several reasons however, neither of these clients can _currently_ be used to access Hadoop services via the Apache Knox Gateway.
+
+This led to thinking about a very simple client that could help people use and evaluate the gateway.
+The list below outlines the general requirements for such a client.
+
+* Promote the evaluation and adoption of the Apache Knox Gateway
+* Simple to deploy and use on data worker desktops to access remote Hadoop clusters
+* Simple to extend with new commands both by other Hadoop projects and by the end user
+* Support the notion of a SSO session for multiple Hadoop interactions
+* Support the multiple authentication and federation token capabilities of the Apache Knox Gateway
+* Promote the use of REST APIs as the dominant remote client mechanism for Hadoop services
+* Promote the sense of Hadoop as a single unified product
+* Aligned with the Apache Knox Gateway's overall goals for security
+
+The result is a very simple DSL ([Domain Specific Language](http://en.wikipedia.org/wiki/Domain-specific_language)) of sorts that is used via [Groovy](http://groovy.codehaus.org) scripts.
+Here is an example of a command that copies a file from the local file system to HDFS.
+
+_Note: The variables session, localFile and remoteFile are assumed to be defined._
+
+    Hdfs.put( session ).file( localFile ).to( remoteFile ).now()
+
+*This work is very early in development but is also very useful in its current state.*
+*We are very interested in receiving feedback about how to improve this feature and the DSL in particular.*
+
+A note of thanks to [REST-assured](https://code.google.com/p/rest-assured/) which provides a [Fluent interface](http://en.wikipedia.org/wiki/Fluent_interface) style DSL for testing REST services.
+It served as the initial inspiration for the creation of this DSL.
+
+
+### Assumptions ###
+
+This document assumes a few things about your environment in order to simplify the examples.
+
+* The JVM is executable as simply java.
+* The Apache Knox Gateway is installed and functional.
+* The example commands are executed within the context of the GATEWAY_HOME current directory.
+The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
+* A few examples require the use of commands from a standard Groovy installation.  These examples are optional but to try them you will need Groovy [installed](http://groovy.codehaus.org/Installing+Groovy).
+
+
+### Basics ###
+
+The DSL requires a shell to interpret the Groovy script.
+The shell can either be used interactively or to execute a script file.
+To simplify use, the distribution contains an embedded version of the Groovy shell.
+
+The shell can be run interactively.  Use the command `exit` to exit.
+
+    java -jar bin/shell.jar
+
+When running interactively it may be helpful to reduce some of the output generated by the shell console.
+Use the following command in the interactive shell to reduce that output.
+This only needs to be done once as these preferences are persisted.
+
+    set verbosity QUIET
+    set show-last-result false
+
+Also when running interactively use the `exit` command to terminate the shell.
+Using `^C` to exit can sometimes leave the parent shell in a problematic state.
+
+The shell can also be used to execute a script by passing a single filename argument.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGetFile.groovy
+
+
+### Examples ###
+
+Once the shell can be launched the DSL can be used to interact with the gateway and Hadoop.
+Below is a very simple example of an interactive shell session to upload a file to HDFS.
+
+    java -jar bin/shell.jar
+    knox:000> session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+    knox:000> Hdfs.put( session ).file( "README" ).to( "/tmp/example/README" ).now()
+
+The `knox:000>` in the example above is the prompt from the embedded Groovy console.
+If your output doesn't look like this you may need to set the verbosity and show-last-result preferences as described in the Basics section above.
+
+If you receive an error `HTTP/1.1 403 Forbidden` it may be because that file already exists.
+Try deleting it with the following command and then try again.
+
+    knox:000> Hdfs.rm(session).file("/tmp/example/README").now()
+
+Without using some other tool to browse HDFS it is hard to tell that this command did anything.
+Execute this to get a bit more feedback.
+
+    knox:000> println "Status=" + Hdfs.put( session ).file( "README" ).to( "/tmp/example/README2" ).now().statusCode
+    Status=201
+
+Notice that a different filename is used for the destination.
+Without this an error would have resulted.
+Of course the DSL also provides a command to list the contents of a directory.
+
+    knox:000> println Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    {"FileStatuses":{"FileStatus":[{"accessTime":1363711366977,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711366977,"owner":"guest","pathSuffix":"README","permission":"644","replication":1,"type":"FILE"},{"accessTime":1363711375617,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711375617,"owner":"guest","pathSuffix":"README2","permission":"644","replication":1,"type":"FILE"}]}}
+
+It is a design decision of the DSL to not provide type safe classes for various request and response payloads.
+Doing so would provide an undesirable coupling between the DSL and the service implementation.
+It also would make adding new commands much more difficult.
+See the Groovy section below for a variety of capabilities and tools for working with JSON and XML to make this easy.
+The example below shows the use of JsonSlurper and GPath to extract content from a JSON response.
+
+    knox:000> import groovy.json.JsonSlurper
+    knox:000> text = Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    knox:000> json = (new JsonSlurper()).parseText( text )
+    knox:000> println json.FileStatuses.FileStatus.pathSuffix
+    [README, README2]
+
+*In the future, "built-in" methods to slurp JSON and XML may be added to make this a bit easier.*
+*This would allow for this type of single line interaction.*
+
+    println Hdfs.ls(session).dir("/tmp").now().json().FileStatuses.FileStatus.pathSuffix
+
+A shell session should always be ended by shutting down the session.
+The examples above do not touch on it but the DSL supports the simple execution of commands asynchronously.
+The shutdown command attempts to ensure that all asynchronous commands have completed before exiting the shell.
+
+    knox:000> session.shutdown()
+    knox:000> exit
+
+All of the commands above could have been combined into a script file and executed as a single line.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+
+This would be the content of that script.
+
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.hadoop.gateway.shell.hdfs.Hdfs
+    import groovy.json.JsonSlurper
+    
+    gateway = "https://localhost:8443/gateway/sandbox"
+    username = "guest"
+    password = "guest-password"
+    dataFile = "README"
+    
+    session = Hadoop.login( gateway, username, password )
+    Hdfs.rm( session ).file( "/tmp/example" ).recursive().now()
+    Hdfs.put( session ).file( dataFile ).to( "/tmp/example/README" ).now()
+    text = Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    json = (new JsonSlurper()).parseText( text )
+    println json.FileStatuses.FileStatus.pathSuffix
+    session.shutdown()
+    exit
+
+Notice the `Hdfs.rm` command.  This is included simply to ensure that the script can be rerun.
+Without this an error would result the second time it is run.
+
+
+### Futures ###
+
+The DSL supports the ability to invoke commands asynchronously via the later() invocation method.
+The object returned from the later() method is a java.util.concurrent.Future parametrized with the response type of the command.
+This is an example of how to asynchronously put a file to HDFS.
+
+    future = Hdfs.put(session).file("README").to("tmp/example/README").later()
+    println future.get().statusCode
+
+The future.get() method will block until the asynchronous command is complete.
+To illustrate the usefulness of this, however, multiple concurrent commands are required.
+
+    readmeFuture = Hdfs.put(session).file("README").to("tmp/example/README").later()
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("tmp/example/LICENSE").later()
+    session.waitFor( readmeFuture, licenseFuture )
+    println readmeFuture.get().statusCode
+    println licenseFuture.get().statusCode
+
+The session.waitFor() method will wait for one or more asynchronous commands to complete.
+
+
+### Closures ###
+
+Futures alone only provide asynchronous invocation of the command.
+What if some processing should also occur asynchronously once the command is complete?
+Support for this is provided by closures.
+Closures are blocks of code that are passed into the later() invocation method.
+In Groovy these are contained within {} immediately after a method.
+These blocks of code are executed once the asynchronous command is complete.
+
+    Hdfs.put(session).file("README").to("tmp/example/README").later(){ println it.statusCode }
+
+In this example the put() command is executed on a separate thread and once complete the `println it.statusCode` block is executed on that thread.
+The it variable is automatically populated by Groovy and is a reference to the result that is returned from the future or now() method.
+The future example above can be rewritten to illustrate the use of closures.
+
+    readmeFuture = Hdfs.put(session).file("README").to("tmp/example/README").later() { println it.statusCode }
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("tmp/example/LICENSE").later() { println it.statusCode }
+    session.waitFor( readmeFuture, licenseFuture )
+
+Again, the session.waitFor() method will wait for one or more asynchronous commands to complete.
+
+
+### Constructs ###
+
+In order to understand the DSL there are three primary constructs that need to be understood.
+
+
+#### Session ####
+
+This construct encapsulates the client side session state that will be shared between all command invocations.
+In particular it will simplify the management of any tokens that need to be presented with each command invocation.
+It also manages a thread pool that is used by all asynchronous commands which is why it is important to call one of the shutdown methods.
+
+The syntax associated with this is expected to change since we expect that credentials will not need to be provided to the gateway.
+Rather it is expected that some form of access token will be used to initialize the session.
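+
+Below is a minimal sketch of the session lifecycle (the gateway URL and credentials are the assumptions used in the earlier examples).
+The later() invocation uses the session's thread pool, which is why the session must be shut down before the script ends.
+
+    // Sketch: login, run one asynchronous command, then release the thread pool.
+    session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+    future = Hdfs.ls( session ).dir( "/tmp" ).later()
+    println future.get().statusCode
+    session.shutdown()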
+
+
+#### Services ####
+
+Services are the primary extension point for adding new suites of commands.
+The current built in examples are: Hdfs, Job and Workflow.
+The desire for extensibility is the reason for the slightly awkward Hdfs.ls(session) syntax.
+Certainly something more like `session.hdfs().ls()` would have been preferred but this would prevent adding new commands easily.
+At a minimum it would result in extension commands with a different syntax from the "built-in" commands.
+
+The service objects essentially function as a factory for a suite of commands.
+
+
+#### Commands ####
+
+Commands provide the behavior of the DSL.
+They typically follow a Fluent interface style in order to allow for single line commands.
+There are really three parts to each command: Request, Invocation, Response
+
+
+#### Request ####
+
+The request is populated by all of the methods following the "verb" method and the "invoke" method.
+For example in `Hdfs.ls(session).dir(dir).now()` the request is populated between the "verb" method `ls()` and the "invoke" method `now()`.
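+
+As a sketch, the same request can also be built across multiple statements to make the request/invocation boundary visible (the variable names here are illustrative).
+
+    request = Hdfs.ls( session )     // "verb" method creates the request
+    request.dir( "/tmp/example" )    // request population (the builder returns itself)
+    response = request.now()         // "invoke" method executes the request
+    println response.statusCode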
+
+
+#### Invocation ####
+
+The invocation method controls how the request is invoked.
+Currently synchronous and asynchronous invocation are supported.
+The now() method executes the request and returns the result immediately.
+The later() method submits the request to be executed later and returns a future from which the result can be retrieved.
+In addition later() invocation method can optionally be provided a closure to execute when the request is complete.
+See the Futures and Closures sections above for additional detail and examples.
+
+
+#### Response ####
+
+The response contains the results of the invocation of the request.
+In most cases the response is a thin wrapper over the HTTP response.
+In fact many commands will share a single BasicResponse type that only provides a few simple methods.
+
+    public int getStatusCode()
+    public long getContentLength()
+    public String getContentType()
+    public String getContentEncoding()
+    public InputStream getStream()
+    public String getString()
+    public byte[] getBytes()
+    public void close();
+
+Thanks to Groovy these methods can be accessed as attributes.
+In some of the examples above, for instance, the statusCode attribute was retrieved.
+
+    println Hdfs.ls(session).dir(dir).now().statusCode
+
+Groovy will invoke the getStatusCode method to retrieve the statusCode attribute.
+
+The three methods getStream(), getBytes() and getString() deserve special attention.
+Care must be taken that the HTTP body is fully read once and only once.
+Therefore one of these methods (and only one) must be called once and only once.
+Calling one of these more than once will cause an error.
+Failing to call one of these methods once will result in lingering open HTTP connections.
+The close() method may be used if the caller is not interested in reading the result body.
+Most commands that do not expect a response body will call close implicitly.
+If the body is retrieved via getBytes() or getString(), the close() method need not be called.
+When using getStream(), care must be taken to consume the entire body otherwise lingering open HTTP connections will result.
+The close() method may be called after reading the body partially to discard the remainder of the body.
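+
+The sketch below (using the ls command from the earlier examples) reads only part of the body via the stream attribute and then calls close() to discard the remainder so that the HTTP connection is not left open.
+
+    response = Hdfs.ls( session ).dir( "/tmp/example" ).now()
+    buffer = new byte[1024]
+    count = response.stream.read( buffer )   // partial read of the body
+    println new String( buffer, 0, count )
+    response.close()                         // discard the rest of the body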
+
+
+### Services ###
+
+The built-in supported client DSL for each Hadoop service can be found in the #[Service Details] section.
+
+
+### Extension ###
+
+Extensibility is a key design goal of the KnoxShell and client DSL.
+There are two ways to provide extended functionality for use with the shell.
+The first is to simply create Groovy scripts that use the DSL to perform a useful task.
+The second is to add new services and commands.
+In order to add new services and commands, new classes must be written in either Groovy or Java and added to the classpath of the shell.
+Fortunately there is a very simple way to add classes and JARs to the shell classpath.
+The first time the shell is executed it will create a configuration file in the same directory as the JAR with the same base name and a `.cfg` extension.
+
+    bin/shell.jar
+    bin/shell.cfg
+
+That file contains both the main class for the shell as well as a definition of the classpath.
+Currently that file will by default contain the following.
+
+    main.class=org.apache.hadoop.gateway.shell.Shell
+    class.path=../lib; ../lib/*.jar; ../ext; ../ext/*.jar
+
+Therefore to extend the shell you should copy any new service and command class either to the `ext` directory or if they are packaged within a JAR copy the JAR to the `ext` directory.
+The `lib` directory is reserved for JARs that may be delivered with the product.
+
+Below are samples for the service and command classes that would need to be written to add new commands to the shell.
+These happen to be Groovy source files but could with very minor changes be Java files.
+The easiest way to add these to the shell is to compile them directly into the `ext` directory.
+*Note: This command depends upon having the Groovy compiler installed and available on the execution path.*
+
+    groovy -d ext -cp bin/shell.jar samples/SampleService.groovy \
+        samples/SampleSimpleCommand.groovy samples/SampleComplexCommand.groovy
+
+These source files are available in the samples directory of the distribution but these are included here for convenience.
+
+
+#### Sample Service (Groovy)
+
+    import org.apache.hadoop.gateway.shell.Hadoop
+
+    class SampleService {
+
+        static String PATH = "/webhdfs/v1"
+
+        static SimpleCommand simple( Hadoop session ) {
+            return new SimpleCommand( session )
+        }
+
+        static ComplexCommand.Request complex( Hadoop session ) {
+            return new ComplexCommand.Request( session )
+        }
+
+    }
+
+#### Sample Simple Command (Groovy)
+
+    import org.apache.hadoop.gateway.shell.AbstractRequest
+    import org.apache.hadoop.gateway.shell.BasicResponse
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.http.client.methods.HttpGet
+    import org.apache.http.client.utils.URIBuilder
+
+    import java.util.concurrent.Callable
+
+    class SimpleCommand extends AbstractRequest<BasicResponse> {
+
+        SimpleCommand( Hadoop session ) {
+            super( session )
+        }
+
+        private String param
+        SimpleCommand param( String param ) {
+            this.param = param
+            return this
+        }
+
+        @Override
+        protected Callable<BasicResponse> callable() {
+            return new Callable<BasicResponse>() {
+                @Override
+                BasicResponse call() {
+                    URIBuilder uri = uri( SampleService.PATH, param )
+                    addQueryParam( uri, "op", "LISTSTATUS" )
+                    HttpGet get = new HttpGet( uri.build() )
+                    return new BasicResponse( execute( get ) )
+                }
+            }
+        }
+
+    }
+
+
+#### Sample Complex Command (Groovy)
+
+    import com.jayway.jsonpath.JsonPath
+    import org.apache.hadoop.gateway.shell.AbstractRequest
+    import org.apache.hadoop.gateway.shell.BasicResponse
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.http.HttpResponse
+    import org.apache.http.client.methods.HttpGet
+    import org.apache.http.client.utils.URIBuilder
+
+    import java.util.concurrent.Callable
+
+    class ComplexCommand {
+
+        static class Request extends AbstractRequest<Response> {
+
+            Request( Hadoop session ) {
+                super( session )
+            }
+
+            private String param;
+            Request param( String param ) {
+                this.param = param;
+                return this;
+            }
+
+            @Override
+            protected Callable<Response> callable() {
+                return new Callable<Response>() {
+                    @Override
+                    Response call() {
+                        URIBuilder uri = uri( SampleService.PATH, param )
+                        addQueryParam( uri, "op", "LISTSTATUS" )
+                        HttpGet get = new HttpGet( uri.build() )
+                        return new Response( execute( get ) )
+                    }
+                }
+            }
+
+        }
+
+        static class Response extends BasicResponse {
+
+            Response(HttpResponse response) {
+                super(response)
+            }
+
+            public List<String> getNames() {
+                return JsonPath.read( string, "\$.FileStatuses.FileStatus[*].pathSuffix" )
+            }
+
+        }
+
+    }
+
+
+### Groovy
+
+The shell included in the distribution is basically an unmodified packaging of the Groovy shell.
+The distribution does however provide a wrapper that makes it very easy to setup the class path for the shell.
+In fact the JARs required to execute the DSL are included on the class path by default.
+Therefore these commands are functionally equivalent if you have Groovy [installed][15].
+See below for a description of the JARs required by the DSL from the `lib` and `dep` directories.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+    groovy -classpath {JARs required by the DSL from lib and dep} samples/ExampleWebHdfsPutGet.groovy
+
+The interactive shell isn't exactly equivalent.
+However the only difference is that the shell.jar automatically executes some additional imports that are useful for the KnoxShell client DSL.
+So these two sets of commands should be functionally equivalent.
+*However there is currently a class loading issue that prevents the groovysh command from working properly.*
+
+    java -jar bin/shell.jar
+
+    groovysh -classpath {JARs required by the DSL from lib and dep}
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.hadoop.gateway.shell.hdfs.Hdfs
+    import org.apache.hadoop.gateway.shell.job.Job
+    import org.apache.hadoop.gateway.shell.workflow.Workflow
+    import java.util.concurrent.TimeUnit
+
+Alternatively, you can use the Groovy Console which does not appear to have the same class loading issue.
+
+    groovyConsole -classpath {JARs required by the DSL from lib and dep}
+
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.hadoop.gateway.shell.hdfs.Hdfs
+    import org.apache.hadoop.gateway.shell.job.Job
+    import org.apache.hadoop.gateway.shell.workflow.Workflow
+    import java.util.concurrent.TimeUnit
+
+The JARs currently required by the client DSL are
+
+    lib/gateway-shell-${gateway-version}.jar
+    dep/httpclient-4.2.3.jar
+    dep/httpcore-4.2.2.jar
+    dep/commons-lang3-3.1.jar
+    dep/commons-codec-1.7.jar
+
+So on Linux/MacOS you would need this command
+
+    groovy -cp lib/gateway-shell-0.2.0-SNAPSHOT.jar:dep/httpclient-4.2.3.jar:dep/httpcore-4.2.2.jar:dep/commons-lang3-3.1.jar:dep/commons-codec-1.7.jar samples/ExampleWebHdfsPutGet.groovy
+
+and on Windows you would need this command
+
+    groovy -cp lib/gateway-shell-0.2.0-SNAPSHOT.jar;dep/httpclient-4.2.3.jar;dep/httpcore-4.2.2.jar;dep/commons-lang3-3.1.jar;dep/commons-codec-1.7.jar samples/ExampleWebHdfsPutGet.groovy
+
+The exact list of required JARs is likely to change from release to release so it is recommended that you utilize the wrapper `bin/shell.jar`.
+
+In addition, because the DSL can be used via standard Groovy, the Groovy integrations in many popular IDEs (e.g. IntelliJ, Eclipse) can also be used.
+This makes it particularly nice to develop and execute scripts to interact with Hadoop.
+The code-completion features in modern IDEs in particular provide immense value.
+All that is required is to add the shell-0.2.0.jar to the project's class path.
+
+There are a variety of Groovy tools that make it very easy to work with the standard interchange formats (i.e. JSON and XML).
+In Groovy the creation of XML or JSON is typically done via a "builder" and parsing done via a "slurper".
+In addition, once JSON or XML is "slurped", GPath, an XPath-like feature built into Groovy, can be used to access data; a small XmlSlurper sketch follows the list below.
+
+* XML
+    * Markup Builder [Overview](http://groovy.codehaus.org/Creating+XML+using+Groovy's+MarkupBuilder), [API](http://groovy.codehaus.org/api/groovy/xml/MarkupBuilder.html)
+    * XML Slurper [Overview](http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper), [API](http://groovy.codehaus.org/api/groovy/util/XmlSlurper.html)
+    * XPath [Overview](http://groovy.codehaus.org/GPath), [API](http://docs.oracle.com/javase/1.5.0/docs/api/javax/xml/xpath/XPath.html)
+* JSON
+    * JSON Builder [API](http://groovy.codehaus.org/gapi/groovy/json/JsonBuilder.html)
+    * JSON Slurper [API](http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html)
+    * JSON Path [API](https://code.google.com/p/json-path/)
+    * GPath [Overview](http://groovy.codehaus.org/GPath)
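+
+Below is a small standalone Groovy sketch (not specific to the gateway DSL; the XML content is purely illustrative) showing XmlSlurper and GPath navigation.
+
+    def xml = "<topology><service><role>WEBHDFS</role><url>http://host:50070/webhdfs</url></service></topology>"
+    def doc = new groovy.util.XmlSlurper().parseText( xml )
+    println doc.service.role.text()   // WEBHDFS
+    println doc.service.url.text()    // http://host:50070/webhdfs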
+

Added: incubator/knox/trunk/books/0.4.0/book_gateway-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.4.0/book_gateway-details.md?rev=1556393&view=auto
==============================================================================
--- incubator/knox/trunk/books/0.4.0/book_gateway-details.md (added)
+++ incubator/knox/trunk/books/0.4.0/book_gateway-details.md Tue Jan  7 22:45:36 2014
@@ -0,0 +1,59 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Gateway Details ##
+
+TODO
+
+### URL Mapping ###
+
+The gateway functions much like a reverse proxy.
+As such it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster.
+Examples of mappings for the WebHDFS, WebHCat, Oozie and Stargate/HBase are shown below.
+These mappings are generated from the combination of the gateway configuration file (i.e. `{GATEWAY_HOME}/conf/gateway-site.xml`) and the cluster topology descriptors (e.g. `{GATEWAY_HOME}/deployments/{cluster-name}.xml`).
+The port numbers shown for the Cluster URLs represent the default ports for these services.
+The actual port number may be different for a given cluster.
+
+* WebHDFS
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs`
+    * Cluster: `http://{webhdfs-host}:50070/webhdfs`
+* WebHCat (Templeton)
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/templeton`
+    * Cluster: `http://{webhcat-host}:50111/templeton`
+* Oozie
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/oozie`
+    * Cluster: `http://{oozie-host}:11000/oozie`
+* Stargate (HBase)
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/hbase`
+    * Cluster: `http://{hbase-host}:60080`
+
+The values for `{gateway-host}`, `{gateway-port}`, `{gateway-path}` are provided via the gateway configuration file (i.e. `{GATEWAY_HOME}/conf/gateway-site.xml`).
+
+The value for `{cluster-name}` is derived from the file name of the cluster topology descriptor (e.g. `{GATEWAY_HOME}/deployments/{cluster-name}.xml`).
+
+The values for `{webhdfs-host}`, `{webhcat-host}`, `{oozie-host}` and `{hbase-host}` are provided via the cluster topology descriptor (e.g. `{GATEWAY_HOME}/deployments/{cluster-name}.xml`).
+
+Note: The ports 50070, 50111, 11000 and 60080 are the defaults for WebHDFS, WebHCat, Oozie and Stargate/HBase respectively.
+Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.
+
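+For reference, below is a minimal sketch of what a cluster topology descriptor can look like, showing how service URLs (and therefore hosts and ports) are declared.
+The provider configuration is omitted and the host values are placeholders; consult the topology files under `{GATEWAY_HOME}/deployments` and the configuration sections that follow for the authoritative format.
+
+    <topology>
+        <!-- gateway/provider configuration omitted for brevity -->
+        <service>
+            <role>WEBHDFS</role>
+            <url>http://{webhdfs-host}:50070/webhdfs</url>
+        </service>
+        <service>
+            <role>OOZIE</role>
+            <url>http://{oozie-host}:11000/oozie</url>
+        </service>
+    </topology>
+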
+<<config.md>>
+<<config_authn.md>>
+<<config_ldap_group_lookup.md>>
+<<config_id_assertion.md>>
+<<config_authz.md>>
+<<config_kerberos.md>>
+

Added: incubator/knox/trunk/books/0.4.0/book_getting-started.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.4.0/book_getting-started.md?rev=1556393&view=auto
==============================================================================
--- incubator/knox/trunk/books/0.4.0/book_getting-started.md (added)
+++ incubator/knox/trunk/books/0.4.0/book_getting-started.md Tue Jan  7 22:45:36 2014
@@ -0,0 +1,104 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Apache Knox Details ##
+
+This section provides everything you need to know to get the Knox gateway up and running against a Hadoop cluster.
+
+#### Hadoop ####
+
+An existing Hadoop 1.x or 2.x cluster is required for Knox to sit in front of and protect.
+It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here.
+It is also possible to use a limited set of services in a Hadoop cluster secured with Kerberos.
+This too requires additional configuration that is not described here.
+See #[Supported Services] for details on what is supported for this release.
+
+Ensure that the Hadoop cluster has at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deployed and running.
+HBase/Stargate and Hive can also be accessed via the Knox Gateway given the proper versions and configuration.
+
+The instructions that follow assume a few things:
+
+1. The gateway is *not* collocated with the Hadoop clusters themselves.
+2. The host names and IP addresses of the cluster services are accessible by the gateway wherever it happens to be running.
+
+All of the instructions and samples provided here are tailored and tested to work "out of the box" against a [Hortonworks Sandbox 2.x VM][sandbox].
+
+
+#### Apache Knox Directory Layout ####
+
+Knox can be installed by expanding the ZIP file or via RPM.
+With an RPM-based install the following directories are created in addition to those described in this section.
+
+    /usr/lib/knox
+    /var/log/knox
+    /var/run/knox
+
+The directory `/usr/lib/knox` is considered your `{GATEWAY_HOME}` and will adhere to the layout described below.
+The directory `/var/log/knox` will contain the output files from the server.
+The directory `/var/run/knox` will contain the process ID for a currently running gateway server.
+
+
+Regardless of the installation method used the layout and content of the `{GATEWAY_HOME}` will be identical.
+The table below provides a brief explanation of the important files and directories within `{GATEWAY_HOME}`.
+
+| Directory     | Purpose |
+| ------------- | ------- |
+| conf/         | Contains configuration files that apply to the gateway globally (i.e. not cluster specific).        |
+| bin/          | Contains the executable shell scripts, batch files and JARs for clients and servers.                |
+| deployments/  | Contains topology descriptors used to configure the gateway for specific Hadoop clusters.           |
+| lib/          | Contains the JARs for all the components that make up the gateway.                                  |
+| dep/          | Contains the JARs for all of the components upon which the gateway depends.                         |
+| ext/          | A directory where user supplied extension JARs can be placed to extend the gateway's functionality. |
+| samples/      | Contains a number of samples that can be used to explore the functionality of the gateway.          |
+| templates/    | Contains default configuration files that can be copied and customized.                             |
+| README        | Provides basic information about the Apache Knox Gateway.                                           |
+| ISSUES        | Describes significant known issues.                                                                 |
+| CHANGES       | Enumerates the changes between releases.                                                            |
+| LICENSE       | Documents the license under which this software is provided.                                        |
+| NOTICE        | Documents required attribution notices for included dependencies.                                   |
+| DISCLAIMER    | Documents that this release is from a project undergoing incubation at Apache.                      |
+
+
+### Supported Services ###
+
+This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway.
+Note that for some Hadoop components only more recent versions can be accessed via the Knox Gateway when the cluster is secured via Kerberos.
+
+| Service            | Version    | Non-Secure  | Secure |
+| ------------------ | ---------- | ----------- | ------ |
+| WebHDFS            | 2.1.0      | ![y]        | ![y]   |
+| WebHCat/Templeton  | 0.11.0     | ![y]        | ![n]   |
+|                    | 0.12.0     | ![y]        | ![y]   |
+| Oozie              | 4.0.0      | ![y]        | ![y]   |
+| HBase/Stargate     | 0.95.2     | ![y]        | ![n]   |
+| Hive (via WebHCat) | 0.11.0     | ![y]        | ![n]   |
+|                    | 0.12.0     | ![y]        | ![y]   |
+| Hive (via JDBC)    | 0.11.0     | ![n]        | ![n]   |
+|                    | 0.12.0     | ![y]        | ![n]   |
+| Hive (via ODBC)    | 0.11.0     | ![n]        | ![n]   |
+|                    | 0.12.0     | ![n]        | ![n]   |
+
+
+### More Examples ###
+
+These examples provide more detail about how to access various Apache Hadoop services via the Apache Knox Gateway.
+
+* #[WebHDFS Examples]
+* #[WebHCat Examples]
+* #[Oozie Examples]
+* #[HBase Examples]
+* #[Hive Examples]

Added: incubator/knox/trunk/books/0.4.0/book_limitations.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.4.0/book_limitations.md?rev=1556393&view=auto
==============================================================================
--- incubator/knox/trunk/books/0.4.0/book_limitations.md (added)
+++ incubator/knox/trunk/books/0.4.0/book_limitations.md Tue Jan  7 22:45:36 2014
@@ -0,0 +1,42 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Limitations ##
+
+
+### Secure Oozie POST/PUT Request Payload Size Restriction ###
+
+With one exception there are no known size limits for request or response payloads that pass through the gateway.
+The exception involves POST or PUT request payload sizes for Oozie in a Kerberos secured Hadoop cluster.
+In this one case there is currently a 4KB payload size limit for the first request made to the Hadoop cluster.
+This is a result of how the gateway negotiates a trust relationship between itself and the cluster via SPNego.
+There is an undocumented configuration setting to modify this limit's value if required.
+In the future this will be made more easily configurable and at that time it will be documented.
+
+
+### LDAP Groups Acquisition ###
+
+The LDAP authenticator currently does not support the acquisition of group information out of the box.
+This can be addressed by implementing a custom Shiro Realm extension.
+Building this into the default implementation is on the roadmap.
+
+
+### Group Membership Propagation ###
+
+Groups that are acquired via Identity Assertion Group Principal Mapping are not propagated to the Hadoop services.
+Therefore groups used for Service Level Authorization policy may not match those acquired within the cluster via GroupMappingServiceProvider plugins.
+

Added: incubator/knox/trunk/books/0.4.0/book_service-details.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.4.0/book_service-details.md?rev=1556393&view=auto
==============================================================================
--- incubator/knox/trunk/books/0.4.0/book_service-details.md (added)
+++ incubator/knox/trunk/books/0.4.0/book_service-details.md Tue Jan  7 22:45:36 2014
@@ -0,0 +1,82 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Service Details ##
+
+The sections that follow describe the integrations currently available out of the box with the gateway.
+In general these sections will include examples that demonstrate how to access each of these services via the gateway.
+In many cases this will include both the use of [cURL][curl] as a REST API client as well as the use of the Knox Client DSL.
+You may notice that there are some minor differences between using the REST API of a given service directly and using it via the gateway.
+In general this is necessary in order to avoid leaking internal Hadoop cluster details to the client.
+
+Keep in mind that the gateway uses a plugin model for supporting Hadoop services.
+Check back with the [Apache Knox][site] site for the latest news on plugin availability.
+You can also create your own custom plugin to extend the capabilities of the gateway.
+
+These are the current Hadoop services with built-in support.
+
+* #[WebHDFS]
+* #[WebHCat]
+* #[Oozie]
+* #[HBase]
+* #[Hive]
+
+### Assumptions
+
+This document assumes a few things about your environment in order to simplify the examples.
+
+* The JVM is executable as simply java.
+* The Apache Knox Gateway is installed and functional.
+* The example commands are executed with the GATEWAY_HOME directory as the current working directory.
+The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
+* The [cURL][curl] command line HTTP client utility is installed and functional.
+* A few examples optionally require the use of commands from a standard Groovy installation.
+These examples are optional but to try them you will need Groovy [installed](http://groovy.codehaus.org/Installing+Groovy).
+* The default configuration for all of the samples is set up for use with Hortonworks' [Sandbox][sandbox] version 2.
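+
+As a quick sanity check of these assumptions, the following commands can be used to confirm that the required tools are available on the PATH; this is a minimal sketch and the reported versions will vary by environment.
+
+    java -version
+    curl --version
+    # Groovy is only needed for the optional examples
+    groovy -version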
+
+### Customization
+
+Using these samples with other Hadoop installations will require changes to the steps described here as well as changes to referenced sample scripts.
+This will also likely require changes to the gateway's default configuration.
+In particular host names, ports, user names and passwords may need to be changed to match your environment.
+These changes may need to be made to gateway configuration and also the Groovy sample script files in the distribution.
+All of the values that may need to be customized in the sample scripts can be found together at the top of each of these files.
+
+### cURL
+
+The cURL HTTP client command line utility is used extensively in the examples for each service.
+In particular this form of the cURL command line is used repeatedly.
+
+    curl -i -k -u guest:guest-password ...
+
+The option -i (aka --include) is used to output HTTP response header information.
+This will be important when the content of the HTTP Location header is required for subsequent requests.
+
+The option -k (aka --insecure) is used to avoid any issues resulting from the use of demonstration SSL certificates.
+
+The option -u (aka --user) is used to provide the credentials to be used when the client is challenged by the gateway.
+
+Keep in mind that the samples do not use the cookie features of cURL for the sake of simplicity.
+Therefore each request via cURL will result in an authentication.
+
+<<service_webhdfs.md>>
+<<service_webhcat.md>>
+<<service_oozie.md>>
+<<service_hbase.md>>
+<<service_hive.md>>
+
+

Added: incubator/knox/trunk/books/0.4.0/book_troubleshooting.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.4.0/book_troubleshooting.md?rev=1556393&view=auto
==============================================================================
--- incubator/knox/trunk/books/0.4.0/book_troubleshooting.md (added)
+++ incubator/knox/trunk/books/0.4.0/book_troubleshooting.md Tue Jan  7 22:45:36 2014
@@ -0,0 +1,304 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Troubleshooting ##
+
+### Finding Logs ###
+
+When things aren't working the first thing you need to do is examine the diagnostic logs.
+Depending upon how you are running the gateway these diagnostic logs will be output to different locations.
+
+#### java -jar bin/gateway.jar ####
+
+When the gateway is run this way the diagnostic output is written directly to the console.
+If you want to capture that output you will need to redirect the console output to a file using OS specific techniques.
+
+    java -jar bin/gateway.jar > gateway.log
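+
+Note that the redirection above only captures standard output; if you also want any messages written to standard error, a standard shell redirection such as the following (not specific to Knox) can be used.
+
+    java -jar bin/gateway.jar > gateway.log 2>&1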
+
+#### bin/gateway.sh start ####
+
+When the gateway is run this way the diagnostic output is written to /var/log/knox/knox.out and /var/log/knox/knox.err.
+Typically only knox.out will have content.
+
+
+### Increasing Logging ###
+
+The `log4j.properties` file in `{GATEWAY_HOME}/conf` can be used to change the granularity of the logging done by Knox.
+The Knox server must be restarted in order for these changes to take effect.
+There are various useful loggers pre-populated but commented out.
+
+    log4j.logger.org.apache.hadoop.gateway=DEBUG # Use this logger to increase the debugging of Apache Knox itself.
+    log4j.logger.org.apache.shiro=DEBUG          # Use this logger to increase the debugging of Apache Shiro.
+    log4j.logger.org.apache.http=DEBUG           # Use this logger to increase the debugging of Apache HTTP components.
+    log4j.logger.org.apache.http.client=DEBUG    # Use this logger to increase the debugging of Apache HTTP client component.
+    log4j.logger.org.apache.http.headers=DEBUG   # Use this logger to increase the debugging of Apache HTTP header.
+    log4j.logger.org.apache.http.wire=DEBUG      # Use this logger to increase the debugging of Apache HTTP wire traffic.
+
+
+### LDAP Server Connectivity Issues ###
+
+If the gateway cannot contact the configured LDAP server you will see errors in the gateway diagnostic output.
+
+    13/11/15 16:30:17 DEBUG authc.BasicHttpAuthenticationFilter: Attempting to execute login with headers [Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQ=]
+    13/11/15 16:30:17 DEBUG ldap.JndiLdapRealm: Authenticating user 'guest' through LDAP
+    13/11/15 16:30:17 DEBUG ldap.JndiLdapContextFactory: Initializing LDAP context using URL 	[ldap://localhost:33389] and principal [uid=guest,ou=people,dc=hadoop,dc=apache,dc=org] with pooling disabled
+    13/11/15 16:30:17 DEBUG servlet.SimpleCookie: Added HttpServletResponse Cookie [rememberMe=deleteMe; Path=/gateway/vaultservice; Max-Age=0; Expires=Thu, 14-Nov-2013 21:30:17 GMT]
+    13/11/15 16:30:17 DEBUG authc.BasicHttpAuthenticationFilter: Authentication required: sending 401 Authentication challenge response.
+
+The client should see something along the lines of:
+
+    HTTP/1.1 401 Unauthorized
+    WWW-Authenticate: BASIC realm="application"
+    Content-Length: 0
+    Server: Jetty(8.1.12.v20130726)
+
+Resolving this will require ensuring that the LDAP server is running and that connection information is correct.
+The LDAP server connection information is configured in the cluster's topology file (e.g. {GATEWAY_HOME}/deployments/sandbox.xml).
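+
+Before examining the configuration, a quick TCP-level check can confirm that something is listening on the expected LDAP port; the host and port below are the demonstration LDAP defaults used elsewhere in this document and will differ in other environments.
+
+    # demo LDAP defaults; host and port may differ in your environment
+    nc -vz localhost 33389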
+
+
+### Hadoop Cluster Connectivity Issues ###
+
+If the gateway cannot contact one of the services in the configured Hadoop cluster you will see errors in the gateway diagnostic output.
+
+    13/11/18 18:49:45 WARN hadoop.gateway: Connection exception dispatching request: http://localhost:50070/webhdfs/v1/?user.name=guest&op=LISTSTATUS org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:50070 refused
+    org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:50070 refused
+    	at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:190)
+    	at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
+    	at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
+    	at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
+    	at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
+    	at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
+    	at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
+    	at org.apache.hadoop.gateway.dispatch.HttpClientDispatch.executeRequest(HttpClientDispatch.java:99)
+
+The resulting behavior on the client will differ by client.
+For the client DSL executing the `{GATEWAY_HOME}/samples/ExampleWebHdfsLs.groovy` script the output will look like this.
+
+    Caught: org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
+    org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
+      at org.apache.hadoop.gateway.shell.AbstractRequest.now(AbstractRequest.java:72)
+      at org.apache.hadoop.gateway.shell.AbstractRequest$now.call(Unknown Source)
+      at ExampleWebHdfsLs.run(ExampleWebHdfsLs.groovy:28)
+
+When executing requests via cURL the output might look similar to the following example.
+
+    Set-Cookie: JSESSIONID=16xwhpuxjr8251ufg22f8pqo85;Path=/gateway/sandbox;Secure
+    Content-Type: text/html;charset=ISO-8859-1
+    Cache-Control: must-revalidate,no-cache,no-store
+    Content-Length: 21856
+    Server: Jetty(8.1.12.v20130726)
+
+    <html>
+    <head>
+    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
+    <title>Error 500 Server Error</title>
+    </head>
+    <body><h2>HTTP ERROR 500</h2>
+
+Resolving this will require ensuring that the Hadoop services are running and that connection information is correct.
+Basic Hadoop connectivity can be evaluated using cURL as described elsewhere.
+Otherwise the Hadoop cluster connection information is configured in the cluster's topology file (e.g. {GATEWAY_HOME}/deployments/sandbox.xml).
+
+
+### Check Hadoop Cluster Access via cURL ###
+
+When you are experiencing connectivity issues it can be helpful to "bypass" the gateway and invoke the Hadoop REST APIs directly.
+This can easily be done using the cURL command line utility or many other REST/HTTP clients.
+Exactly how to use cURL depends on the configuration of your Hadoop cluster.
+In general however you will use a command line like the one that follows.
+
+    curl -ikv -X GET 'http://namenode-host:50070/webhdfs/v1/?op=LISTSTATUS'
+
+If you are using Sandbox the WebHDFS or NameNode port will be mapped to localhost so this command can be used.
+
+    curl -ikv -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
+
+If you are using a cluster secured with Kerberos you will need to have used `kinit` to authenticate to the KDC.
+Then the command below should verify that WebHDFS in the Hadoop cluster is accessible.
+
+    curl -ikv --negotiate -u : -X GET 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'
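+
+Acquiring the Kerberos ticket typically looks like the following, where the principal and realm are placeholders that must be replaced with values from your environment.
+
+    # placeholder principal and realm
+    kinit guest@EXAMPLE.COM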
+
+
+### Authentication Issues ###
+The following log information is available when you enable debug level logging for Shiro. This can be done within the `conf/log4j.properties` file. Note the "Password not correct for user" message.
+
+    13/11/15 16:37:15 DEBUG authc.BasicHttpAuthenticationFilter: Attempting to execute login with headers [Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQw]
+    13/11/15 16:37:15 DEBUG ldap.JndiLdapRealm: Authenticating user 'guest' through LDAP
+    13/11/15 16:37:15 DEBUG ldap.JndiLdapContextFactory: Initializing LDAP context using URL [ldap://localhost:33389] and principal [uid=guest,ou=people,dc=hadoop,dc=apache,dc=org] with pooling disabled
+    2013-11-15 16:37:15,899 INFO  Password not correct for user 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org'
+    2013-11-15 16:37:15,899 INFO  Authenticator org.apache.directory.server.core.authn.SimpleAuthenticator@354c78e3 failed to authenticate: BindContext for DN 'uid=guest,ou=people,dc=hadoop,dc=apache,dc=org', credentials <0x67 0x75 0x65 0x73 0x74 0x2D 0x70 0x61 0x73 0x73 0x77 0x6F 0x72 0x64 0x30 >
+    2013-11-15 16:37:15,899 INFO  Cannot bind to the server
+    13/11/15 16:37:15 DEBUG servlet.SimpleCookie: Added HttpServletResponse Cookie [rememberMe=deleteMe; Path=/gateway/vaultservice; Max-Age=0; Expires=Thu, 14-Nov-2013 21:37:15 GMT]
+    13/11/15 16:37:15 DEBUG authc.BasicHttpAuthenticationFilter: Authentication required: sending 401 Authentication challenge response.
+
+The client will likely see something along the lines of:
+
+    HTTP/1.1 401 Unauthorized
+    WWW-Authenticate: BASIC realm="application"
+    Content-Length: 0
+    Server: Jetty(8.1.12.v20130726)
+
+#### Using ldapsearch to verify LDAP connectivity and credentials
+
+If your authentication to Knox fails and you believe you are using correct credentials, you can verify the connectivity and credentials using ldapsearch, assuming you are using an LDAP directory for authentication.
+
+Assuming you are using the default values that came out of the box with Knox, your ldapsearch command would look like the following.
+
+    ldapsearch -h localhost -p 33389 -D "uid=guest,ou=people,dc=hadoop,dc=apache,dc=org" -w guest-password -b "uid=guest,ou=people,dc=hadoop,dc=apache,dc=org" "objectclass=*"
+
+This should produce output like the following.
+
+    # extended LDIF
+
+    LDAPv3
+    base <uid=guest,ou=people,dc=hadoop,dc=apache,dc=org> with scope subtree
+    filter: objectclass=*
+    requesting: ALL
+
+    # guest, people, hadoop.apache.org
+    dn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
+    objectClass: organizationalPerson
+    objectClass: person
+    objectClass: inetOrgPerson
+    objectClass: top
+    uid: guest
+    cn: Guest
+    sn: User
+    userpassword:: Z3Vlc3QtcGFzc3dvcmQ=
+
+    # search result
+    search: 2
+    result: 0 Success
+
+    # numResponses: 2
+    # numEntries: 1
+
+In a more general form the ldapsearch command would be as follows.
+
+    ldapsearch -h {HOST} -p {PORT} -D {DN of binding user} -w {bind password} -b {DN of binding user} "objectclass=*"
+
+
+### Hostname Resolution Issues ###
+
+The deployments/sandbox.xml topology file has the host mapping feature enabled.
+This is required due to the way networking is setup in the Sandbox VM.
+Specifically the VM's internal hostname is sandbox.hortonworks.com.
+Since this hostname cannot be resolved from outside the VM, Knox needs to map that hostname to something resolvable.
+
+If, for example, host mapping is disabled but the Sandbox VM is still used, you will see an error in the diagnostic output similar to the one below.
+
+    13/11/18 19:11:35 WARN hadoop.gateway: Connection exception dispatching request: http://sandbox.hortonworks.com:50075/webhdfs/v1/user/guest/example/README?op=CREATE&namenoderpcaddress=sandbox.hortonworks.com:8020&user.name=guest&overwrite=false java.net.UnknownHostException: sandbox.hortonworks.com
+    java.net.UnknownHostException: sandbox.hortonworks.com
+    	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
+
+On the other hand, if you are migrating from the Sandbox-based configuration to a cluster you have deployed, you may see a similar error.
+However in this case you may need to disable host mapping.
+This can be done by modifying the topology file (e.g. deployments/sandbox.xml) for the cluster.
+
+    ...
+    <provider>
+        <role>hostmap</role>
+        <name>static</name>
+        <enabled>false</enabled>
+        <param><name>localhost</name><value>sandbox,sandbox.hortonworks.com</value></param>
+    </provider>
+    ....
+
+
+### Job Submission Issues - HDFS Home Directories ###
+
+If you see an error like the following in your console while submitting a job using the Groovy shell, it is likely that the authenticated user does not have a home directory on HDFS.
+
+<pre><code>
+Caught: org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 403 Forbidden
+org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 403 Forbidden
+</code></pre>
+
+You would also see this error if you try a file operation on the home directory of the authenticating user.
+
+The error would look a little different, as shown below, if you are attempting the operation with cURL.
+
+<pre><code>
+{"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission denied: user=tom, access=WRITE, inode=\"/user\":hdfs:hdfs:drwxr-xr-x"}}* 
+</code></pre>
+
+#### Resolution
+
+Create the home directory for the user on HDFS.
+The home directory is typically of the form `/user/{userid}` and should be owned by the user.
+The user 'hdfs' can create such a directory and make the user the owner of the directory.
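+
+For example, assuming the authenticated user is guest (substitute the actual user ID), the 'hdfs' superuser could create the home directory and assign ownership with commands along these lines.
+
+    # 'guest' is a placeholder user ID; the owning group may also differ
+    sudo -u hdfs hdfs dfs -mkdir /user/guest
+    sudo -u hdfs hdfs dfs -chown guest:guest /user/guest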
+
+
+### Job Submission Issues - OS Accounts ###
+
+If the Hadoop cluster is not secured with Kerberos, the user submitting a job need not have an OS account on the Hadoop NodeManagers.
+
+If the Hadoop cluster is secured with Kerberos, the user submitting the job should have an OS account on the Hadoop NodeManagers.
+
+In either case, if the user does not have such an OS account, their file permissions are based on user ownership of files or the "other" permission in "ugo" POSIX permissions.
+The user does not get any file permissions as a member of any group if you are using the default hadoop.security.group.mapping.
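+
+One way to see which groups the cluster actually resolves for a user under the configured group mapping is the `hdfs groups` command, run on a cluster node; the user name below is a placeholder.
+
+    # replace guest with the user in question
+    hdfs groups guest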
+
+TODO: add sample error message from running test on secure cluster with missing OS account
+
+### HBase Issues ###
+
+If you experience problems running the HBase samples with the Sandbox VM it may be necessary to restart HBase and Stargate.
+This can sometimes occur when the Sandbox VM is restarted from a saved state.
+If the client hangs after emitting the last line in the sample output below you are most likely affected.
+
+    System version : {...}
+    Cluster version : 0.96.0.2.0.6.0-76-hadoop2
+    Status : {...}
+    Creating table 'test_table'...
+
+HBase and Stargate can be restarted using the following commands on the Hadoop Sandbox VM.
+You will need to ssh into the VM in order to run these commands.
+
+    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop master
+    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start master
+    sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh restart rest -p 60080
+
+
+### SSL Certificate Issues ###
+
+Clients that do not trust the certificate presented by the server will behave in different ways.
+A browser will typically warn you of the inability to trust the received certificate and give you an opportunity to add an exception for the particular certificate.
+cURL will present you with the following message and instructions for turning off certificate verification:
+
+	curl performs SSL certificate verification by default, using a "bundle" 
+	 of Certificate Authority (CA) public keys (CA certs). If the default
+	 bundle file isn't adequate, you can specify an alternate file
+	 using the --cacert option.
+    If this HTTPS server uses a certificate signed by a CA represented in 
+	 the bundle, the certificate verification probably failed due to a
+	 problem with the certificate (it might be expired, or the name might
+	 not match the domain name in the URL).
+	If you'd like to turn off curl's verification of the certificate, use
+	 the -k (or --insecure) option.
+
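+If you would rather not disable verification with -k, one common alternative is to export the certificate presented by the gateway and point cURL at it with --cacert; the host, port, topology and file name below are examples only, and the openssl command line tool is assumed to be available.
+
+    # export the certificate presented on the assumed gateway address
+    echo | openssl s_client -connect localhost:8443 2>/dev/null | openssl x509 > gateway.pem
+    # use the exported certificate for verification instead of -k
+    curl -i --cacert gateway.pem -u guest:guest-password \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'
+
+This works for a self-signed certificate as long as the host name in the URL matches the name in the certificate.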
+
+### Filing Bugs ###
+
+Bugs can be filed using [Jira][jira].
+Please include the results of this command below in the Environment section.
+Also include the version of Hadoop being used in the same section.
+
+    cd {GATEWAY_HOME}
+    java -jar bin/gateway.jar -version
+


