knox-commits mailing list archives

From kmin...@apache.org
Subject [2/2] git commit: Site and release doc updates in prep for next 0.2.0 RC
Date Wed, 20 Mar 2013 16:02:37 GMT
Site and release doc updates in prep for next 0.2.0 RC


Project: http://git-wip-us.apache.org/repos/asf/incubator-knox/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-knox/commit/f162485a
Tree: http://git-wip-us.apache.org/repos/asf/incubator-knox/tree/f162485a
Diff: http://git-wip-us.apache.org/repos/asf/incubator-knox/diff/f162485a

Branch: refs/heads/master
Commit: f162485afe9f2fda7c727b084174c18b5a767d50
Parents: d09ef25
Author: Kevin Minder <kevin.minder@hortonworks.com>
Authored: Wed Mar 20 12:02:31 2013 -0400
Committer: Kevin Minder <kevin.minder@hortonworks.com>
Committed: Wed Mar 20 12:02:31 2013 -0400

----------------------------------------------------------------------
 gateway-release/CHANGES                            |   11 +
 gateway-release/INSTALL                            |  252 +++++++++
 gateway-release/ISSUES                             |   11 +
 gateway-release/README                             |  399 +--------------
 gateway-release/readme.md                          |  355 -------------
 gateway-site/pom.xml                               |    6 +
 gateway-site/src/site/markdown/client.md.vm        |   78 ++--
 gateway-site/src/site/markdown/examples.md.vm      |  155 ++++++
 .../src/site/markdown/getting-started.md.vm        |  353 +++++++++++++
 gateway-site/src/site/markdown/readme-0-2-0.md     |   35 --
 gateway-site/src/site/markdown/release-0-2-0.md    |   35 ++
 gateway-site/src/site/markdown/setup.html          |   34 --
 gateway-site/src/site/markdown/site-process.md     |    8 +
 gateway-site/src/site/markdown/template.md         |    6 +-
 gateway-site/src/site/site.xml                     |    5 +-
 pom.xml                                            |    5 -
 16 files changed, 901 insertions(+), 847 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/f162485a/gateway-release/CHANGES
----------------------------------------------------------------------
diff --git a/gateway-release/CHANGES b/gateway-release/CHANGES
new file mode 100644
index 0000000..40a6ddc
--- /dev/null
+++ b/gateway-release/CHANGES
@@ -0,0 +1,11 @@
+------------------------------------------------------------------------------
+Changes v0.1.0 - v0.2.0
+------------------------------------------------------------------------------
+HTTPS Support (Client side)
+Oozie Support
+Protected DataNode URL query strings
+Pluggable Identity Asserters
+Principal Mapping
+URL Rewriting Enhancements
+KnoxShell Client DSL
+

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/f162485a/gateway-release/INSTALL
----------------------------------------------------------------------
diff --git a/gateway-release/INSTALL b/gateway-release/INSTALL
new file mode 100644
index 0000000..f320165
--- /dev/null
+++ b/gateway-release/INSTALL
@@ -0,0 +1,252 @@
+------------------------------------------------------------------------------
+Requirements
+------------------------------------------------------------------------------
+Java:
+  Java 1.6 or later
+
+Hadoop Cluster:
+  A local installation of a Hadoop cluster is required at this time.  Hadoop
+  EC2 cluster and/or Sandbox installations are currently difficult to access
+  remotely via the Gateway because those Hadoop services run with internal IP
+  addresses.  For the Gateway to work in these cases it currently needs to be
+  deployed on the EC2 cluster or Sandbox itself.
+
+  The instructions that follow assume that the Gateway is *not* collocated
+  with the Hadoop clusters themselves and (most importantly) that the
+  hostnames and IP addresses of the cluster services are accessible by the
+  gateway wherever it happens to be running.
+
+  Ensure that the Hadoop cluster has WebHDFS, WebHCat (i.e. Templeton) and
+  Oozie configured, deployed and running.
+
+------------------------------------------------------------------------------
+Installation and Deployment Instructions
+------------------------------------------------------------------------------
+1. Install
+     Download and extract the gateway-0.2.0-SNAPSHOT.zip file into the
+     installation directory that will contain your GATEWAY_HOME
+       jar xf gateway-0.2.0-SNAPSHOT.zip
+     This will create a directory 'gateway' in your current directory.
+
+2. Enter Gateway Home directory
+     cd gateway
+   The fully qualified name of this directory will be referenced as
+   {GATEWAY_HOME} throughout the remainder of this document.
+
+3. Start the demo LDAP server (ApacheDS)
+   a. First, understand that the LDAP server provided here is for demonstration
+      purposes. You may configure the LDAP specifics within the topology
+      descriptor for the cluster as described in step 5 below, in order to
+      customize what LDAP instance to use. The assumption is that most users
+      will leverage the demo LDAP server while evaluating this release and
+      should therefore continue with the instructions here in step 3.
+   b. Edit {GATEWAY_HOME}/conf/users.ldif if required and add your users and
+      groups to the file.  A number of normal Hadoop users
+      (e.g. hdfs, mapred, hcat, hive) have already been included.  Note that
+      the passwords in this file are "fictitious" and have nothing to do with
+      the actual accounts on the Hadoop cluster you are using.  There is also
+      a copy of this file in the templates directory that you can use to start
+      over if necessary.
+   c. Start the LDAP server - pointing it to the config dir where it will find
+      the users.ldif file in the conf directory.
+        java -jar bin/ldap-0.2.0-SNAPSHOT.jar conf &
+      There are a number of log messages of the form "Created null." that can
+      safely be ignored.  Take note of the port on which it was started as this
+      needs to match later configuration.  This will create a directory named
+      'org.apache.hadoop.gateway.security.EmbeddedApacheDirectoryServer' that
+      can safely be ignored.
+
+4. Start the Gateway server
+     java -jar bin/gateway-server-0.2.0-SNAPSHOT.jar
+   a. Take note of the port identified in the logging output as you will need this for
+      accessing the gateway.
+   b. The server will prompt you for the master secret (password). This secret
+      is used to secure artifacts used by the gateway server for things like
+      SSL and credential/password aliasing. This secret will have to be entered
+      at startup unless you choose to persist it. Remember this secret and keep
+      it safe.  It represents the keys to the kingdom. See the Persisting the
+      Master section for more information.
+
+5. Configure the Gateway with the topology of your Hadoop cluster
+   a. Edit the file {GATEWAY_HOME}/deployments/sample.xml
+   b. Change the host and port in the urls of the <service> elements for
+      NAMENODE, TEMPLETON and OOZIE services to match your Hadoop cluster
+      deployment.
+   c. The default configuration contains the LDAP URL for an LDAP server.  By
+      default that file is configured to access the demo ApacheDS based LDAP
+      server and its default configuration. By default, this server listens on
+      port 33389.  Optionally, you can change the LDAP URL for the LDAP server
+      to be used for authentication.  This is set via the
+      main.ldapRealm.contextFactory.url property in the
+      <gateway><provider><authentication> section.
+   d. Save the file.  The directory {GATEWAY_HOME}/deployments is monitored
+      by the Gateway server and reacts to the discovery of a new or changed
+      cluster topology descriptor by provisioning the endpoints and required
+      filter chains to serve the needs of each cluster as described by the
+      topology file.  Note that the name of the file excluding the extension
+      is also used as the path for that cluster in the URL.  So for example
+      the sample.xml file will result in Gateway URLs of the form
+        http://{gateway-host}:{gateway-port}/gateway/sample/namenode/api/v1
+
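For reference, the <service> elements described in step 5 look roughly like the
sketch below.  The hosts and ports are placeholders for your own Hadoop
cluster, and the enclosing <topology> element and exact nesting are
illustrative; consult the sample.xml shipped with the release for the
authoritative layout.

```xml
<!-- Hypothetical excerpt of {GATEWAY_HOME}/deployments/sample.xml.     -->
<!-- Replace the hosts and ports with those of your own Hadoop cluster. -->
<topology>
  <service>
    <role>NAMENODE</role>
    <url>http://namenode-host:50070/webhdfs/v1</url>
  </service>
  <service>
    <role>TEMPLETON</role>
    <url>http://templeton-host:50111/templeton/v1</url>
  </service>
  <service>
    <role>OOZIE</role>
    <url>http://oozie-host:11000/oozie/v1</url>
  </service>
</topology>
```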
+6. Test the installation and configuration of your Gateway
+   Invoke the LISTSTATUS operation on HDFS represented by your configured
+   NAMENODE by using your web browser or curl:
+
+   curl -i -k -u hdfs:hdfs-password -X GET \
+     'https://localhost:8443/gateway/sample/namenode/api/v1/?op=LISTSTATUS'
+
+   The above command should result in something along the lines of the
+   output below.  The exact information returned will depend on the content
+   of HDFS in your Hadoop cluster.
+
+     HTTP/1.1 200 OK
+       Content-Type: application/json
+       Content-Length: 760
+       Server: Jetty(6.1.26)
+
+     {"FileStatuses":{"FileStatus":[
+     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595859762,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"type":"DIRECTORY"},
+     {"accessTime":0,"blockSize":0,"group":"mapred","length":0,"modificationTime":1350595874024,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"type":"DIRECTORY"},
+     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350596040075,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"type":"DIRECTORY"},
+     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595857178,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"type":"DIRECTORY"}
+     ]}}
+
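A quick way to sanity-check a response like the one above is to pull out just
the pathSuffix values.  The sketch below works from a saved copy of the sample
response body; the scratch file path is arbitrary.

```shell
# Save the sample LISTSTATUS response body shown above, then extract the
# pathSuffix of each entry with grep.  /tmp/liststatus.json is an arbitrary
# scratch location.
cat > /tmp/liststatus.json <<'EOF'
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595859762,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"group":"mapred","length":0,"modificationTime":1350595874024,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350596040075,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595857178,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"type":"DIRECTORY"}
]}}
EOF
# Prints the four pathSuffix entries: apps, mapred, tmp, user
grep -o '"pathSuffix":"[^"]*"' /tmp/liststatus.json
```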
+   For additional information on WebHDFS, Templeton/WebHCat and Oozie
+   REST APIs, see the following URLs respectively:
+
+   http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
+   http://people.apache.org/~thejas/templeton_doc_v1/
+   http://oozie.apache.org/docs/3.3.1/WebServicesAPI.html
+
+------------------------------------------------------------------------------
+Persisting the Master
+------------------------------------------------------------------------------
+The master secret is required to start the server. This secret is used by the gateway instance to access secured
+artifacts. Keystores, trust stores and credential stores are all protected with the master secret.
+
+You may persist the master secret by supplying the *-persist-master* switch at startup. This will result in a
+warning indicating that persisting the secret is less secure than providing it at startup. We do make some provisions in
+order to protect the persisted password.
+
+It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessible by
+the user that the gateway is running as.
+
+After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your
+environment. This is probably the most important layer of defense for the master secret. Do not assume that the
+encryption is sufficient protection.
+
+A specific user should be created to run the gateway; this will help protect a persisted master file.
+
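As a concrete illustration of tightening those permissions, the sketch below
restricts a file to its owning user.  The path used here is just a stand-in;
in a real installation the target is {GATEWAY_HOME}/config/security/master.

```shell
# Demonstrated on a placeholder file; in a real installation the target is
# {GATEWAY_HOME}/config/security/master.
MASTER_FILE=/tmp/master-demo
touch "$MASTER_FILE"
chmod 600 "$MASTER_FILE"          # readable/writable by the owning user only
ls -l "$MASTER_FILE" | cut -c1-10 # prints: -rw-------
```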
+------------------------------------------------------------------------------
+Management of Security Artifacts
+------------------------------------------------------------------------------
+There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications,
+access to protected resources and the encryption of sensitive data. These artifacts can be managed from outside of
+the gateway instances or generated and populated by the gateway instance itself.
+
+The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway
+instances and instances as part of a cluster of gateways in mind.
+
+Upon start of the gateway server we:
+
+1. Look for an identity store at conf/security/keystores/gateway.jks. The identity store contains the certificate
+   and private key used to represent the identity of the server for SSL connections and signature creation.
+   a. If there is no identity store we create one and generate a self-signed certificate for use in standalone/demo
+      mode. The certificate is stored with an alias of gateway-identity.
+   b. If an identity store is found then we ensure that it can be loaded using the provided master secret and
+      that there is an alias called gateway-identity.
+2. Look for a credential store at conf/security/keystores/__gateway-credentials.jceks. This credential store is used
+   to store secrets/passwords that are used by the gateway. For instance, this is where the passphrase for accessing
+   the gateway-identity certificate is kept.
+   a. If there is no credential store found then we create one and populate it with a generated passphrase for the alias
+      gateway-identity-passphrase. This is coordinated with the population of the self-signed cert into the identity-store.
+   b. If a credential store is found then we ensure that it can be loaded using the provided master secret and that the
+      expected aliases have been populated with secrets.
+
+Upon deployment of a Hadoop cluster topology within the gateway we:
+
+1. Look for a credential store for the topology. For instance, we have a sample topology that gets deployed out of the box.
+   We look for conf/security/keystores/sample-credentials.jceks. This topology specific credential store is used for storing
+   secrets/passwords that are used for encrypting sensitive data with topology specific keys.
+   a. If no credential store is found for the topology being deployed then one is created for it. Population of the aliases
+      is delegated to the configured providers within the system that will require the use of a secret for a particular
+      task. They may programmatically set the value of the secret or choose to have the value for the specified alias
+      generated through the AliasService.
+   b. If a credential store is found then we ensure that it can be loaded with the provided master secret and that the
+      configured providers have the opportunity to ensure that the aliases are populated and, if not, to populate them.
+
+By leveraging the algorithm described above, these artifacts can be managed in
+a number of ways.
+
+1. Using a single gateway instance as a master instance, the artifacts can be generated or placed into the expected
+   location and then replicated across all of the slave instances before startup.
+2. Using an NFS mount as a central location for the artifacts would provide a single source of truth without the need to
+   replicate them over the network. Of course, NFS mounts have their own challenges.
+
+Summary of Secrets to be Managed:
+
+1. Master secret - the same for all gateway instances in a cluster of gateways
+2. All security related artifacts are protected with the master secret
+3. Secrets used by the gateway itself are stored within the gateway credential store and are the same across all gateway
+   instances in the cluster of gateways
+4. Secrets used by providers within cluster topologies are stored in topology specific credential stores and are the same
+   for the same topology across the cluster of gateway instances. However, they are specific to the topology - so secrets
+   for one hadoop cluster are different from those of another. This allows for failover from one gateway instance to another
+   even when encryption is being used while not allowing the compromise of one encryption key to expose the data for all clusters.
+
+NOTE: the SSL certificate will need special consideration depending on the type of certificate. Wildcard certs may be able
+to be shared across all gateway instances in a cluster. When certs are dedicated to specific machines the gateway identity
+store will not be able to be blindly replicated as hostname verification problems will ensue. Obviously, truststores will
+need to be taken into account as well.
+
+
+------------------------------------------------------------------------------
+Mapping Gateway URLs to Hadoop cluster URLs
+------------------------------------------------------------------------------
+The Gateway functions much like a reverse proxy.  As such it maintains a
+mapping of URLs that are exposed externally by the Gateway to URLs that are
+provided by the Hadoop cluster.  Examples of mappings for the NameNode and
+Templeton are shown below.  These mapping are generated from the combination
+of the Gateway configuration file (i.e. {GATEWAY_HOME}/gateway-site.xml)
+and the cluster topology descriptors
+(e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
+
+  HDFS (NameNode)
+    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/namenode/api/v1
+    Cluster: http://<namenode-host>:50070/webhdfs/v1
+  WebHCat (Templeton)
+    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/templeton/api/v1
+    Cluster: http://<templeton-host>:50111/templeton/v1
+  Oozie
+    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/oozie/api/v1
+    Cluster: http://<templeton-host>:11000/oozie/v1
+
+The values for <gateway-host>, <gateway-port>, <gateway-path> are provided via
+the Gateway configuration file (i.e. {GATEWAY_HOME}/gateway-site.xml).
+
+The value for <cluster-name> is derived from the name of the cluster topology
+descriptor (e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
+
+The value for <namenode-host> and <templeton-host> is provided via the cluster
+topology descriptor (e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
+
+Note: The ports 50070, 50111 and 11000 are the defaults for NameNode,
+      Templeton and Oozie respectively. Their values can also be provided via
+      the cluster topology descriptor if your Hadoop cluster uses different
+      ports.
+
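The mapping above can be made concrete with a small shell sketch.  The values
assigned below match the defaults used elsewhere in this document and are
placeholders for your own deployment; whether http or https applies depends on
your gateway configuration.

```shell
# Placeholder values; substitute those from your gateway-site.xml and the
# name of your cluster topology descriptor file.
GATEWAY_HOST=localhost
GATEWAY_PORT=8443
GATEWAY_PATH=gateway
CLUSTER_NAME=sample   # derived from deployments/sample.xml

# Assemble the externally exposed Gateway URLs for each service.
WEBHDFS_URL="https://${GATEWAY_HOST}:${GATEWAY_PORT}/${GATEWAY_PATH}/${CLUSTER_NAME}/namenode/api/v1"
TEMPLETON_URL="https://${GATEWAY_HOST}:${GATEWAY_PORT}/${GATEWAY_PATH}/${CLUSTER_NAME}/templeton/api/v1"
OOZIE_URL="https://${GATEWAY_HOST}:${GATEWAY_PORT}/${GATEWAY_PATH}/${CLUSTER_NAME}/oozie/api/v1"

echo "$WEBHDFS_URL"   # prints: https://localhost:8443/gateway/sample/namenode/api/v1
```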
+------------------------------------------------------------------------------
+Usage Examples
+------------------------------------------------------------------------------
+Please see the Apache Knox Gateway website for detailed examples.
+http://knox.incubator.apache.org/examples.html
+
+------------------------------------------------------------------------------
+Enabling logging
+------------------------------------------------------------------------------
+If necessary you can enable additional logging by editing the log4j.properties
+file in the conf directory.  Changing the rootLogger value from ERROR to DEBUG
+will generate a large amount of debug logging.  A number of useful,
+finer-grained loggers are also provided in the file.
+
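A minimal sketch of such an edit is shown below.  The property keys follow
standard log4j conventions, but the appender name is a placeholder; check the
stock conf/log4j.properties for the names it actually defines.

```
# Hypothetical excerpt of conf/log4j.properties.  Raising the root level
# from ERROR to DEBUG enables verbose logging; "app" stands in for whatever
# appender the stock file defines.
log4j.rootLogger=DEBUG, app
```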

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/f162485a/gateway-release/ISSUES
----------------------------------------------------------------------
diff --git a/gateway-release/ISSUES b/gateway-release/ISSUES
new file mode 100644
index 0000000..290956e
--- /dev/null
+++ b/gateway-release/ISSUES
@@ -0,0 +1,11 @@
+------------------------------------------------------------------------------
+Known Issues
+------------------------------------------------------------------------------
+The Gateway cannot be used against either EC2 clusters or Hadoop Sandbox
+VMs unless the gateway is deployed within the EC2 cluster or on the
+Sandbox VM.
+
+If the cluster deployment descriptors in {GATEWAY_HOME}/deployments are
+incorrect, the errors logged by the gateway are overly detailed and not
+diagnostic enough.
+

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/f162485a/gateway-release/README
----------------------------------------------------------------------
diff --git a/gateway-release/README b/gateway-release/README
index 7eb06a5..cb0a37c 100644
--- a/gateway-release/README
+++ b/gateway-release/README
@@ -1,5 +1,5 @@
 ------------------------------------------------------------------------------
-README file for Hadoop Gateway v0.2.0
+README file for Hadoop Gateway
 ------------------------------------------------------------------------------
 This distribution includes cryptographic software.  The country in 
 which you currently reside may have restrictions on the import, 
@@ -49,400 +49,37 @@ authentication, identity assertion, API aggregation and eventually management
 capabilities.
 
 ------------------------------------------------------------------------------
-Changes v0.1.0 - v0.2.0
+Changes
 ------------------------------------------------------------------------------
-HTTPS Support (Client side)
-Oozie Support
-Protected DataNode URL query strings
-Pluggable Identity Asserters
-Principal Mapping
-URL Rewriting Enhancements
+Please see the CHANGES file.
 
 ------------------------------------------------------------------------------
-Requirements
+Known Issues
 ------------------------------------------------------------------------------
-Java: 
-  Java 1.6 or later
+Please see the ISSUES file.
 
-Hadoop Cluster:
-  A local installation of a Hadoop Cluster is required at this time.  Hadoop 
-  EC2 cluster and/or Sandbox installations are currently difficult to access 
-  remotely via the Gateway. The EC2 and Sandbox limitation is caused by
-  Hadoop services running with internal IP addresses.  For the Gateway to work
-  in these cases it will need to be deployed on the EC2 cluster or Sandbox, at
-  this time.
-  
-  The instructions that follow assume that the Gateway is *not* collocated
-  with the Hadoop clusters themselves and (most importantly) that the
-  hostnames and IP addresses of the cluster services are accessible by the
-  gateway where ever it happens to be running.
-
-  The Hadoop cluster should be ensured to have WebHDFS, WebHCat
-  (i.e. Templeton) and Oozie configured, deployed and running.
-
-------------------------------------------------------------------------------
-Know Issues
-------------------------------------------------------------------------------
-The Gateway cannot be be used against either EC2 clusters or Hadoop Sandbox
-VMs unless the gateway is deployed within the EC2 cluster or the on the
-Sandbox VM.
-
-If the cluster deployment descriptors in {GATEWAY_HOME}/deployments are
-incorrect, the errors logged by the gateway are overly detailed and not
-diagnostic enough.
-
-------------------------------------------------------------------------------
-Installation and Deployment Instructions
-------------------------------------------------------------------------------
-1. Install
-     Download and extract the gateway-0.2.0-SNAPSHOT.zip file into the
-     installation directory that will contain your GATEWAY_HOME
-       jar xf gateway-0.2.0-SNAPSHOT.zip
-     This will create a directory 'gateway' in your current directory.
-
-2. Enter Gateway Home directory
-     cd gateway
-   The fully qualified name of this directory will be referenced as
-   {GATEWAY_HOME} throughout the remainder of this document.
-
-3. Start the demo LDAP server (ApacheDS)
-   a. First, understand that the LDAP server provided here is for demonstration
-      purposes. You may configure the LDAP specifics within the topology
-      descriptor for the cluster as described in step 5 below, in order to
-      customize what LDAP instance to use. The assumption is that most users
-      will leverage the demo LDAP server while evaluating this release and
-      should therefore continue with the instructions here in step 3.
-   b. Edit {GATEWAY_HOME}/conf/users.ldif if required and add your users and
-      groups to the file.  A number of normal Hadoop users
-      (e.g. hdfs, mapred, hcat, hive) have already been included.  Note that
-      the passwords in this file are "fictitious" and have nothing to do with
-      the actual accounts on the Hadoop cluster you are using.  There is also
-      a copy of this file in the templates directory that you can use to start
-      over if necessary.
-   c. Start the LDAP server - pointing it to the config dir where it will find
-      the users.ldif file in the conf directory.
-        java -jar bin/ldap-0.2.0-SNAPSHOT.jar conf &
-      There are a number of log messages of the form "Created null." that can
-      safely be ignored.  Take note of the port on which it was started as this
-      needs to match later configuration.  This will create a directory named
-      'org.apache.hadoop.gateway.security.EmbeddedApacheDirectoryServer' that
-      can safely be ignored.
-
-4. Start the Gateway server
-     java -jar bin/gateway-server-0.2.0-SNAPSHOT.jar
-   a. Take note of the port identified in the logging output as you will need this for 
-      accessing the gateway.
-   b. The server will prompt you for the master secret (password). This secret is used 
-      to secure artifacts used to secure artifacts used by the gateway server for 
-      things like SSL, credential/password aliasing. This secret will have to be entered 
-      at startup unless you choose to persist it. Remember this secret and keep it safe. 
-      It represents the keys to the kingdom. See the Persisting the Master section for 
-      more information.
-
-5. Configure the Gateway with the topology of your Hadoop cluster
-   a. Edit the file {GATEWAY_HOME}/deployments/sample.xml
-   b. Change the host and port in the urls of the <service> elements for
-      NAMENODE, TEMPLETON and OOZIE services to match your Hadoop cluster
-      deployment.
-   c. The default configuration contains the LDAP URL for a LDAP server.  By
-      default that file is configured to access the demo ApacheDS based LDAP
-      server and its default configuration. By default, this server listens on
-      port 33389.  Optionally, you can change the LDAP URL for the LDAP server
-      to be used for authentication.  This is set via the
-      main.ldapRealm.contextFactory.url property in the
-      <gateway><provider><authentication> section.
-   d. Save the file.  The directory {GATEWAY_HOME}/deployments is monitored
-      by the Gateway server and reacts to the discovery of a new or changed
-      cluster topology descriptor by provisioning the endpoints and required
-      filter chains to serve the needs of each cluster as described by the
-      topology file.  Note that the name of the file excluding the extension
-      is also used as the path for that cluster in the URL.  So for example
-      the sample.xml file will result in Gateway URLs of the form
-        http://{gateway-host}:{gateway-port}/gateway/sample/namenode/api/v1
-
-6. Test the installation and configuration of your Gateway
-   Invoke the LISTSATUS operation on HDFS represented by your configured
-   NAMENODE by using your web browser or curl:
-
-   curl -i -k -u hdfs:hdfs-password -X GET \
-     'https://localhost:8443/gateway/sample/namenode/api/v1/?op=LISTSTATUS'
-
-   The results of the above command should result in something to along the
-   lines of the output below.  The exact information returned is subject to
-   the content within HDFS in your Hadoop cluster.
-
-     HTTP/1.1 200 OK
-       Content-Type: application/json
-       Content-Length: 760
-       Server: Jetty(6.1.26)
-
-     {"FileStatuses":{"FileStatus":[
-     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595859762,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"type":"DIRECTORY"},
-     {"accessTime":0,"blockSize":0,"group":"mapred","length":0,"modificationTime":1350595874024,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"type":"DIRECTORY"},
-     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350596040075,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"type":"DIRECTORY"},
-     {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595857178,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"type":"DIRECTORY"}
-     ]}}
-
-   For additional information on WebHDFS, Templeton/WebHCat and Oozie
-   REST APIs, see the following URLs respectively:
-
-   http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
-   http://people.apache.org/~thejas/templeton_doc_v1/
-   http://oozie.apache.org/docs/3.3.1/WebServicesAPI.html
-   
 ------------------------------------------------------------------------------
-Persisting the Master
+Installation
 ------------------------------------------------------------------------------
-The master secret is required to start the server. This secret is used to access secured artifacts by the gateway 
-instance. Keystore, trust stores and credential stores are all protected with the master secret.
-
-You may persist the master secret by supplying the *-persist-master* switch at startup. This will result in a
-warning indicating that persisting the secret is less secure than providing it at startup. We do make some provisions in
-order to protect the persisted password. 
-
-It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessable by 
-the user that the gateway is running as. 
-
-After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your 
-environment. This is probably the most important layer of defense for master secret. Do not assume that the encryption if
-sufficient protection.
-
-A specific user should be created to run the gateway this will protect a persisted master file.
+Please see the INSTALL file or the Apache Knox Gateway website.
+http://knox.incubator.apache.org/getting-started.html
 
 ------------------------------------------------------------------------------
-Management of Security Artifacts
+Examples
 ------------------------------------------------------------------------------
-There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, 
-access to protected resources and the encryption of sensitive data. These artifacts can be managed from outside of
-the gateway instances or generated and populated by the gateway instance itself.
-
-The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway 
-instances and instances as part of a cluster of gateways in mind.
-
-Upon start of the gateway server we:
-
-1. Look for an identity store at conf/security/keystores/gateway.jks. The identity store contains the certificate 
-   and private key used to represent the identity of the server for SSL connections and signtature creation.
-	a. If there is no identity store we create one and generate a self-signed certificate for use in standalone/demo 
-   	   mode. The certificate is stored with an alias of gateway-identity.
-   	b. If there is an identity store found than we ensure that it can be loaded using the provided master secret and
-   	   that there is an alias with called gateway-identity.
-2. Look for a credential store at conf/security/keystores/__gateway-credentials.jceks. This credential store is used
-   to store secrets/passwords that are used by the gateway. For instance, this is where the passphrase for accessing
-   the gateway-identity certificate is kept.
-   a. If there is no credential store found then we create one and populate it with a generated passphrase for the alias
-      gateway-identity-passphrase. This is coordinated with the population of the self-signed cert into the identity-store.
-   b. If a credential store is found then we ensure that it can be loaded using the provided master secret and that the 
-      expected aliases have been populated with secrets.
-      
-Upon deployment of a Hadoop cluster topology within the gateway we:
-
-1. Look for a credential store for the topology. For instance, we have a sample topology that gets deployed out of the box.
-   We look for conf/security/keystores/sample-credentials.jceks. This topology specific credential store is used for storing
-   secrets/passwords that are used for encrypting sensitive data with topology specific keys.
-   a. If no credential store is found for the topology being deployed then one is created for it. Population of the aliases
-      is delegated to the configured providers within the system that will require the use of a secret for a particular
-      task. They may programmatically set the value of the secret or choose to have the value for the specified alias
-      generated through the AliasService.
-   b. If a credential store is found then we ensure that it can be loaded with the provided master secret and the configured
-      providers have the opportunity to ensure that the aliases are populated and, if not, to populate them.
- 
- The algorithm described above allows these artifacts to be managed in a
- number of ways.
- 
- 1. Using a single gateway instance as a master instance, the artifacts can be generated or placed into the expected
-    location and then replicated across all of the slave instances before startup.
- 2. Using an NFS mount as a central location for the artifacts would provide a single source of truth without the need to 
-    replicate them over the network. Of course, NFS mounts have their own challenges.
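Option 1 above (master-instance replication) can be sketched as a small shell helper. The keystore path conf/security/keystores comes from the text; the helper name, the directory arguments, and the use of a local `cp` as a stand-in for a remote copy (scp/rsync in a real deployment) are illustrative assumptions.

```shell
# Replicate gateway security artifacts from a master instance's home
# directory into one or more other gateway home directories.  Both the
# identity store (gateway.jks) and the credential stores (*.jceks) are
# protected by the shared master secret, so they can be copied verbatim.
replicate_artifacts() {
  master_home="$1"; shift
  src="$master_home/conf/security/keystores"
  for slave_home in "$@"; do
    mkdir -p "$slave_home/conf/security/keystores"
    for f in "$src"/*.jks "$src"/*.jceks; do
      # Skip globs that matched nothing.
      [ -f "$f" ] && cp "$f" "$slave_home/conf/security/keystores/"
    done
  done
  return 0
}
```

In a real cluster the inner `cp` would be an scp/rsync to each slave host, and as the NOTE below observes, a per-host SSL certificate cannot be replicated this way.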
-
-Summary of Secrets to be Managed:
-
-1. Master secret - the same for all gateway instances in a cluster of gateways
-2. All security related artifacts are protected with the master secret
-3. Secrets used by the gateway itself are stored within the gateway credential store and are the same across all gateway 
-   instances in the cluster of gateways
-4. Secrets used by providers within cluster topologies are stored in topology specific credential stores and are the same 
-   for the same topology across the cluster of gateway instances. However, they are specific to the topology - so secrets 
-   for one hadoop cluster are different from those of another. This allows for failover from one gateway instance to another 
-   even when encryption is being used while not allowing the compromise of one encryption key to expose the data for all clusters.
-
-NOTE: the SSL certificate will need special consideration depending on the type of certificate. Wildcard certs may be able 
-to be shared across all gateway instances in a cluster. When certs are dedicated to specific machines the gateway identity 
-store will not be able to be blindly replicated as hostname verification problems will ensue. Obviously, truststores will 
-need to be taken into account as well.
-
-
-------------------------------------------------------------------------------
-Mapping Gateway URLs to Hadoop cluster URLs
-------------------------------------------------------------------------------
-The Gateway functions much like a reverse proxy.  As such it maintains a
-mapping of URLs that are exposed externally by the Gateway to URLs that are
-provided by the Hadoop cluster.  Examples of mappings for the NameNode and
-Templeton are shown below.  These mappings are generated from the combination
-of the Gateway configuration file (i.e. {GATEWAY_HOME}/gateway-site.xml)
-and the cluster topology descriptors
-(e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-  HDFS (NameNode)
-    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/namenode/api/v1
-    Cluster: http://<namenode-host>:50070/webhdfs/v1
-  WebHCat (Templeton)
-    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/templeton/api/v1
-    Cluster: http://<templeton-host>:50111/templeton/v1
-  Oozie
-    Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/oozie/api/v1
-    Cluster: http://<oozie-host>:11000/oozie/v1
-
-The values for <gateway-host>, <gateway-port>, <gateway-path> are provided via
-the Gateway configuration file (i.e. {GATEWAY_HOME}/gateway-site.xml).
-
-The value for <cluster-name> is derived from the name of the cluster topology
-descriptor (e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-The values for <namenode-host> and <templeton-host> are provided via the cluster
-topology descriptor (e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-Note: The ports 50070, 50111 and 11000 are the defaults for NameNode,
-      Templeton and Oozie respectively. Their values can also be provided via
-      the cluster topology descriptor if your Hadoop cluster uses different
-      ports.
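The mapping rules above can be sketched as a tiny helper that composes the externally exposed Gateway URL from its configured parts. The helper name and the example values are illustrative only; the URL shape is taken from the mappings shown above.

```shell
# Compose the external Gateway URL for a service exposed by a cluster
# topology: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/<service>/api/v1
gateway_url() {
  # $1=gateway-host $2=gateway-port $3=gateway-path $4=cluster-name $5=service
  printf 'http://%s:%s/%s/%s/%s/api/v1\n' "$1" "$2" "$3" "$4" "$5"
}

# Example (hypothetical values):
#   gateway_url localhost 8443 gateway sample namenode
```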
-
-------------------------------------------------------------------------------
-Enabling logging
-------------------------------------------------------------------------------
-If necessary you can enable additional logging by editing the log4j.properties
-file in the conf directory.  Changing the rootLogger value from ERROR to DEBUG
-will generate a large amount of debug logging.  A number of useful, more
-fine-grained loggers are also provided in the file.
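The ERROR-to-DEBUG change described above can be scripted; a minimal sketch follows. The helper name is an assumption, and the `log4j.rootLogger` property name is the usual log4j 1.x convention, so verify it against your conf/log4j.properties before use.

```shell
# Switch the root logger level from ERROR to DEBUG in a log4j.properties
# file, preserving any appender list after the level (e.g. "ERROR, drfa").
enable_debug_logging() {
  # $1 = path to log4j.properties; edits in place via a temp copy
  sed -e 's/^log4j\.rootLogger=ERROR/log4j.rootLogger=DEBUG/' "$1" > "$1.tmp" \
    && mv "$1.tmp" "$1"
}
```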
+Please see the Apache Knox Gateway website for detailed examples.
+http://knox.incubator.apache.org/examples.html
 
 ------------------------------------------------------------------------------
 Filing bugs
 ------------------------------------------------------------------------------
-File bugs at hortonworks.jira.com under Project "Hadoop Gateway Development"
-Include the results of
-  java -jar bin/gateway-0.2.0-SNAPSHOT.jar -version
-in the Environment section.  Also include the version of Hadoop being used.
-
-------------------------------------------------------------------------------
-Example #1: WebHDFS & Templeton/WebHCat
-------------------------------------------------------------------------------
-The example below illustrates the sequence of curl commands that could be used
-to run a "word count" MapReduce job.  It utilizes the hadoop-examples.jar
-from a Hadoop install for running a simple word count job.  Take care to
-follow the instructions below for steps 4/5 and 6/7 where the Location header
-returned by the call to the NameNode is copied for use with the call to the
-DataNode that follows it.
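The Location-header handoff described above (steps 4/5 and 6/7) can be automated instead of copied by hand. A minimal sketch, assuming the headers of the first PUT are saved to a file; the helper name and file name are illustrative.

```shell
# Extract the Location header value from HTTP response headers on stdin.
# WebHDFS op=CREATE replies with a 307 redirect pointing at a DataNode.
location_from_headers() {
  awk 'tolower($1) == "location:" { sub(/\r$/, "", $2); print $2 }'
}

# Hypothetical use with the commands below:
#   curl -i -k -u mapred:mapred-password -X PUT \
#     'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/hadoop-examples.jar?op=CREATE' \
#     > headers.txt
#   loc=$(location_from_headers < headers.txt)
#   curl -i -k -u mapred:mapred-password -T hadoop-examples.jar -X PUT "$loc"
```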
-
-# 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
-curl -i -k -u mapred:mapred-password -X DELETE \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test?op=DELETE&recursive=true'
-
-# 1. Create a test input directory /tmp/test/input
-curl -i -k -u mapred:mapred-password -X PUT \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/input?op=MKDIRS'
-
-# 2. Create a test output directory /tmp/test/output
-curl -i -k -u mapred:mapred-password -X PUT \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/output?op=MKDIRS'
-
-# 3. Create the inode for hadoop-examples.jar in /tmp/test
-curl -i -k -u mapred:mapred-password -X PUT \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/hadoop-examples.jar?op=CREATE'
-
-# 4. Upload hadoop-examples.jar to /tmp/test.  Use a hadoop-examples.jar from a Hadoop install.
-curl -i -k -u mapred:mapred-password -T hadoop-examples.jar -X PUT '{Value of Location header from command above}'
-
-# 5. Create the inode for a sample file README in /tmp/test/input
-curl -i -k -u mapred:mapred-password -X PUT \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/input/README?op=CREATE'
-
-# 6. Upload the README file to /tmp/test/input.  Use the README found in {GATEWAY_HOME}.
-curl -i -k -u mapred:mapred-password -T README -X PUT '{Value of Location header from command above}'
-
-# 7. Submit the word count job via WebHCat/Templeton.
-# Take note of the Job ID in the JSON response as this will be used in the next step.
-curl -v -i -k -u mapred:mapred-password -X POST \
-  -d jar=/tmp/test/hadoop-examples.jar -d class=wordcount \
-  -d arg=/tmp/test/input -d arg=/tmp/test/output \
-  'https://localhost:8443/gateway/sample/templeton/api/v1/mapreduce/jar'
+Currently the Jira infrastructure for the Apache Knox project is not set up.
+If you have access, file bugs at hortonworks.jira.com under
+  Project: Bug DB
+  Component: Knox
 
-# 8. Look at the status of the job
-curl -i -k -u mapred:mapred-password -X GET \
-  'https://localhost:8443/gateway/sample/templeton/api/v1/queue/{Job ID returned in JSON body from previous step}'
+Include the results of this command
 
-# 9. Look at the status of the job queue
-curl -i -k -u mapred:mapred-password -X GET \
-  'https://localhost:8443/gateway/sample/templeton/api/v1/queue'
-
-# 10. List the contents of the output directory /tmp/test/output
-curl -i -k -u mapred:mapred-password -X GET \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/output?op=LISTSTATUS'
-
-# 11. Optionally cleanup the test directory
-curl -i -k -u mapred:mapred-password -X DELETE \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test?op=DELETE&recursive=true'
-
-------------------------------------------------------------------------------
-Example #2: WebHDFS & Oozie
-------------------------------------------------------------------------------
-The example below illustrates the sequence of curl commands that could be used
-to run a "word count" MapReduce job via an Oozie workflow.  It utilizes the
-hadoop-examples.jar from a Hadoop install for running a simple word count job.
-Take care to follow the instructions below where replacement values are
-required.  These replacement values are identified with { } markup.
-
-# 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
-curl -i -k -u mapred:mapred-password -X DELETE \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test?op=DELETE&recursive=true'
-
-# 1. Create the inode for workflow definition file in /tmp/test
-curl -i -k -u mapred:mapred-password -X PUT \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/workflow.xml?op=CREATE'
-
-# 2. Upload the workflow definition file.  This file can be found in {GATEWAY_HOME}/templates
-curl -i -k -u mapred:mapred-password -T templates/workflow-definition.xml -X PUT \
-  '{Value of Location header from command above}'
-
-# 3. Create the inode for hadoop-examples.jar in /tmp/test/lib
-curl -i -k -u mapred:mapred-password -X PUT \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/lib/hadoop-examples.jar?op=CREATE'
-
-# 4. Upload hadoop-examples.jar to /tmp/test/lib.  Use a hadoop-examples.jar from a Hadoop install.
-curl -i -k -u mapred:mapred-password -T hadoop-examples.jar -X PUT \
-  '{Value of Location header from command above}'
-
-# 5. Create the inode for a sample input file README in /tmp/test/input.
-curl -i -k -u mapred:mapred-password -X PUT \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/input/README?op=CREATE'
-
-# 6. Upload the README file to /tmp/test/input.
-# The sample below uses the README file found in {GATEWAY_HOME}.
-curl -i -k -u mapred:mapred-password -T README -X PUT \
-  '{Value of Location header from command above}'
-
-# 7. Create the job configuration file by replacing the {NameNode host:port} and {JobTracker host:port}
-# in the command below to values that match your Hadoop configuration.
-# NOTE: The hostnames must be resolvable by the Oozie daemon.  The ports are the RPC ports not the HTTP ports.
-# For example {NameNode host:port} might be sandbox:8020 and {JobTracker host:port} sandbox:50300
-# The source workflow-configuration.xml file can be found in {GATEWAY_HOME}/templates
-# Alternatively, this file can copied and edited manually for environments without the sed utility.
-sed -e s/REPLACE.NAMENODE.RPCHOSTPORT/{NameNode host:port}/ \
-  -e s/REPLACE.JOBTRACKER.RPCHOSTPORT/{JobTracker host:port}/ \
-  <templates/workflow-configuration.xml >workflow-configuration.xml
-
-# 8. Submit the job via Oozie
-# Take note of the Job ID in the JSON response as this will be used in the next step.
-curl -i -k -u mapred:mapred-password -T workflow-configuration.xml -H Content-Type:application/xml -X POST \
-  'https://localhost:8443/gateway/sample/oozie/api/v1/jobs?action=start'
-
-# 9. Query the job status via Oozie.
-curl -i -k -u mapred:mapred-password -X GET \
-  'https://localhost:8443/gateway/sample/oozie/api/v1/job/{Job ID returned in JSON body from previous step}'
-
-# 10. List the contents of the output directory /tmp/test/output
-curl -i -k -u mapred:mapred-password -X GET \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/output?op=LISTSTATUS'
+  java -jar bin/gateway-0.2.0-SNAPSHOT.jar -version
 
-# 11. Optionally cleanup the test directory
-curl -i -k -u mapred:mapred-password -X DELETE \
-  'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test?op=DELETE&recursive=true'
\ No newline at end of file
+in the Environment section.  Also include the version of Hadoop being used.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/f162485a/gateway-release/readme.md
----------------------------------------------------------------------
diff --git a/gateway-release/readme.md b/gateway-release/readme.md
deleted file mode 100644
index 5532768..0000000
--- a/gateway-release/readme.md
+++ /dev/null
@@ -1,355 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-------------------------------------------------------------------------------
-#README file for Hadoop Gateway v0.2.0
-------------------------------------------------------------------------------
-##Notice
----
-
-This distribution includes cryptographic software.  The country in 
-which you currently reside may have restrictions on the import, 
-possession, use, and/or re-export to another country, of 
-encryption software.  BEFORE using any encryption software, please 
-check your country's laws, regulations and policies concerning the
-import, possession, or use, and re-export of encryption software, to 
-see if this is permitted.  See <http://www.wassenaar.org/> for more
-information.
-
-The U.S. Government Department of Commerce, Bureau of Industry and
-Security (BIS), has classified this software as Export Commodity 
-Control Number (ECCN) 5D002.C.1, which includes information security
-software using or performing cryptographic functions with asymmetric
-algorithms.  The form and manner of this Apache Software Foundation
-distribution makes it eligible for export under the License Exception
-ENC Technology Software Unrestricted (TSU) exception (see the BIS 
-Export Administration Regulations, Section 740.13) for both object 
-code and source code.
-
-The following provides more details on the included cryptographic
-software:
-  
-This package includes the use of ApacheDS which is dependent upon the 
-Bouncy Castle Crypto APIs written by the Legion of the Bouncy Castle
-http://www.bouncycastle.org/ feedback-crypto@bouncycastle.org.
-
-------------------------------------------------------------------------------
-##Description
-------------------------------------------------------------------------------
-The charter for the Gateway project is to simplify and normalize the deployment
-and implementation of secure Hadoop clusters as well as to be a centralized access
-point for the service specific REST APIs exposed from within the cluster.
-
-Milestone-1 of this project intends to demonstrate the ability to dynamically
-provision reverse proxy capabilities with filter chains that meet the cluster
-specific needs for authentication.
-
-BASIC authentication with identity being asserted to the rest of the cluster 
-via Pseudo/Simple authentication will be demonstrated for security.
-
-For API aggregation, the Gateway will provide a central endpoint for HDFS and
-Templeton APIs for each cluster.
-
-Future Milestone releases will extend these capabilities with additional
-authentication, identity assertion, API aggregation and eventually management
-capabilities.
-
-------------------------------------------------------------------------------
-##Requirements
-------------------------------------------------------------------------------
-Java: 
-  Java 1.6 or later
-
-Hadoop Cluster:
-  A local installation of a Hadoop Cluster is required at this time.  Hadoop 
-  EC2 cluster and/or Sandbox installations are currently difficult to access 
-  remotely via the Gateway. The EC2 and Sandbox limitation is caused by Hadoop
-  services running with internal IP addresses.  For the Gateway to work in these 
-  cases it will need to be deployed on the EC2 cluster or Sandbox, at this time.  
-  
-  The instructions that follow assume that the Gateway is *not* colocated with
-  the Hadoop clusters themselves and (most importantly) that the IP addresses 
-  of the cluster services are accessible by the gateway where ever it happens to
-  be running.
-
-  The Hadoop cluster should be ensured to have WebHDFS and WebHCat (i.e. Templeton) 
-  deployed and configured.
-
-------------------------------------------------------------------------------
-##Known Issues
-------------------------------------------------------------------------------
-Currently there is an issue with submitting Java MapReduce jobs via the WebHCat 
-REST APIs.  Therefore step 7 in the Example section currently fails.
-
-The Gateway cannot be used against either an EC2 cluster or Hadoop Sandbox
-unless the gateway is deployed in the EC2 cluster or on the Sandbox VM.
-
-Currently when any of the files in {GATEWAY_HOME}/deployments is changed, all
-deployed cluster topologies will be reloaded.  Therefore you may see
-unexpected messages of the form "Loading topology file:"
-
-If the cluster deployment descriptors in {GATEWAY_HOME}/deployments are
-incorrect the errors logged by the gateway are overly detailed and not
-diagnostic enough.
-
-------------------------------------------------------------------------------
-##Installation and Deployment Instructions
-------------------------------------------------------------------------------
-
-1. Install
-     Download and extract the gateway-0.2.0-SNAPSHOT.zip file into the installation directory that will contain your
-     GATEWAY_HOME
-       jar xf gateway-0.2.0-SNAPSHOT.zip
-     This will create a directory 'gateway' in your current directory.
-
-2. Enter Gateway Home directory
-     cd gateway
-   The fully qualified name of this directory will be referenced as {GATEWAY_HOME} throughout the remainder of this
-   document.
-
-3. Start the demo LDAP server (ApacheDS)
-   a. First, understand that the LDAP server provided here is for demonstration purposes. You may configure the
-      LDAP specifics within the topology descriptor for the cluster as described in step 5 below, in order to
-      customize what LDAP instance to use. The assumption is that most users will leverage the demo LDAP server 
-      while evaluating this release and should therefore continue with the instructions here in step 3.
-   b. Edit {GATEWAY_HOME}/conf/users.ldif if required and add your users and groups to the file.
-      A number of normal Hadoop users (e.g. hdfs, mapred, hcat, hive) have already been included.  Note that
-      the passwords in this file are "fictitious" and have nothing to do with the actual accounts on the Hadoop
-      cluster you are using.  There is also a copy of this file in the templates directory that you can use to
-      start over if necessary.
-   c. Start the LDAP server - pointing it to the config dir where it will find the users.ldif file in the conf
-      directory.
-        java -jar bin/gateway-test-ldap-0.2.0-SNAPSHOT.jar conf &
-      There are a number of messages of the form "Created null." that can safely be ignored.
-      Take note of the port on which it was started as this needs to match later configuration.
-      This will create a directory named 'org.apache.hadoop.gateway.security.EmbeddedApacheDirectoryServer' that
-      can safely be ignored.
-
-4. Start the Gateway server
-     java -jar bin/gateway-server-0.2.0-SNAPSHOT.jar
-   a. Take note of the port identified in the logging output as you will need this for accessing the gateway.
-   b. The server will prompt you for the master secret (password). This secret is used to secure artifacts used
-      by the gateway server for things like SSL and credential/password aliasing. This secret will have to be
-      entered at startup unless you choose to persist it. Remember this secret and keep it safe. It represents
-      the keys to the kingdom. See the Persisting the Master section for more information.
-
-5. Configure the Gateway with the topology of your Hadoop cluster
-   a. Edit the file {GATEWAY_HOME}/deployments/sample.xml
-   b. Change the host and port in the urls of the <service> elements for NAMENODE and TEMPLETON service to match your
-      cluster deployment.
-   c. The default configuration contains the LDAP URL for a LDAP server.  By default that file is configured to access 
-      the demo ApacheDS based LDAP server and its default configuration. By default, this server listens on port 33389.
-      Optionally, you can change the LDAP URL for the LDAP server to be used for authentication.  This is set via
-      the main.ldapRealm.contextFactory.url property in the <gateway><provider><authentication> section.
-   d. Save the file.  The directory {GATEWAY_HOME}/deployments is monitored by the Gateway server and reacts to the
-      discovery of a new or changed cluster topology descriptor by provisioning the endpoints and required filter
-      chains to serve the needs of each cluster as described by the topology file.  Note that the name of the file
-      excluding the extension is also used as the path for that cluster in the URL.  So for example the sample.xml
-      file will result in Gateway URLs of the form
-        http://{gateway-host}:{gateway-port}/gateway/sample/namenode/api/v1
-
-6. Test the installation and configuration of your Gateway
-   Invoke the LISTSTATUS operation on HDFS represented by your configured NAMENODE by using your web browser or curl:
-
->      curl --user hdfs:hdfs-password -i -L http://localhost:8888/gateway/sample/namenode/api/v1/tmp?op=LISTSTATUS
-
-   The results of the above command should result in something along the lines of the output below.  The exact
-   information returned is subject to the content within HDFS in your Hadoop cluster.
-
->      HTTP/1.1 200 OK
->        Content-Type: application/json
->        Content-Length: 760
->        Server: Jetty(6.1.26)
-> 
->      {"FileStatuses":{"FileStatus":[
->      {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595859762,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"type":"DIRECTORY"},
->      {"accessTime":0,"blockSize":0,"group":"mapred","length":0,"modificationTime":1350595874024,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"type":"DIRECTORY"},
->      {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350596040075,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"type":"DIRECTORY"},
->      {"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595857178,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"type":"DIRECTORY"}
->      ]}}
-
-   For additional information on HDFS and Templeton APIs, see the following URLs respectively:
-
-   http://hadoop.apache.org/docs/r1.0.4/webhdfs.html
-     and
-   http://people.apache.org/~thejas/templeton_doc_v1/
-   
-------------------------------------------------------------------------------
-##Persisting the Master
-------------------------------------------------------------------------------
-The master secret is required to start the server. This secret is used to access secured artifacts by the gateway 
-instance. Keystore, trust stores and credential stores are all protected with the master secret.
-
-You may persist the master secret by supplying the *-persist-master* switch at startup. This will result in a
-warning indicating that persisting the secret is less secure than providing it at startup. We do make some provisions in
-order to protect the persisted password. 
-
-It is encrypted with AES 128 bit encryption and where possible the file permissions are set to only be accessible by
-the user that the gateway is running as.
-
-After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your
-environment. This is probably the most important layer of defense for the master secret. Do not assume that the
-encryption is sufficient protection.
-
-A dedicated user should be created to run the gateway; this will help protect a persisted master file.
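The permission check recommended above can be scripted. A minimal sketch: the config/security/master path comes from the text, while the helper name and the GNU/BSD `stat` fallback are assumptions.

```shell
# Return success only if the persisted master file is readable by its
# owner alone (mode 600 or 400).
master_perms_ok() {
  f="$1"
  # GNU stat uses -c '%a'; BSD/macOS stat uses -f '%Lp'.
  perms=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
  [ "$perms" = "600" ] || [ "$perms" = "400" ]
}

# Hypothetical use:
#   master_perms_ok config/security/master || echo "tighten permissions on the master file"
```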
-
-------------------------------------------------------------------------------
-##Management of Security Artifacts
-------------------------------------------------------------------------------
-There are a number of artifacts that are used by the gateway in ensuring the security of wire level communications, 
-access to protected resources and the encryption of sensitive data. These artifacts can be managed from outside of
-the gateway instances or generated and populated by the gateway instance itself.
-
-The following is a description of how this is coordinated with both standalone (development, demo, etc) gateway 
-instances and instances as part of a cluster of gateways in mind.
-
-Upon start of the gateway server we:
-
-1. Look for an identity store at conf/security/keystores/gateway.jks. The identity store contains the certificate
-   and private key used to represent the identity of the server for SSL connections and signature creation.
-   a. If there is no identity store we create one and generate a self-signed certificate for use in standalone/demo
-      mode. The certificate is stored with an alias of gateway-identity.
-   b. If an identity store is found then we ensure that it can be loaded using the provided master secret and
-      that there is an alias called gateway-identity.
-2. Look for a credential store at conf/security/keystores/__gateway-credentials.jceks. This credential store is used
-   to store secrets/passwords that are used by the gateway. For instance, this is where the passphrase for accessing
-   the gateway-identity certificate is kept.
-   a. If there is no credential store found then we create one and populate it with a generated passphrase for the alias
-      gateway-identity-passphrase. This is coordinated with the population of the self-signed cert into the identity-store.
-   b. If a credential store is found then we ensure that it can be loaded using the provided master secret and that the 
-      expected aliases have been populated with secrets.
-      
-Upon deployment of a Hadoop cluster topology within the gateway we:
-
-1. Look for a credential store for the topology. For instance, we have a sample topology that gets deployed out of the box.
-   We look for conf/security/keystores/sample-credentials.jceks. This topology specific credential store is used for storing
-   secrets/passwords that are used for encrypting sensitive data with topology specific keys.
-   a. If no credential store is found for the topology being deployed then one is created for it. Population of the aliases
-      is delegated to the configured providers within the system that will require the use of a secret for a particular
-      task. They may programmatically set the value of the secret or choose to have the value for the specified alias
-      generated through the AliasService.
-   b. If a credential store is found then we ensure that it can be loaded with the provided master secret and the configured
-      providers have the opportunity to ensure that the aliases are populated and, if not, to populate them.
- 
- The algorithm described above allows these artifacts to be managed in a
- number of ways.
- 
- 1. Using a single gateway instance as a master instance, the artifacts can be generated or placed into the expected
-    location and then replicated across all of the slave instances before startup.
- 2. Using an NFS mount as a central location for the artifacts would provide a single source of truth without the need to 
-    replicate them over the network. Of course, NFS mounts have their own challenges.
-
-Summary of Secrets to be Managed:
-
-1. Master secret - the same for all gateway instances in a cluster of gateways
-2. All security related artifacts are protected with the master secret
-3. Secrets used by the gateway itself are stored within the gateway credential store and are the same across all gateway instances in the cluster of gateways
-4. Secrets used by providers within cluster topologies are stored in topology specific credential stores and are the same for the same topology across the cluster of gateway instances. However, they are specific to the topology - so secrets for one hadoop cluster are different from those of another. This allows for failover from one gateway instance to another even when encryption is being used while not allowing the compromise of one encryption key to expose the data for all clusters.
-
-NOTE: the SSL certificate will need special consideration depending on the type of certificate. Wildcard certs may be able to be shared across all gateway instances in a cluster. When certs are dedicated to specific machines the gateway identity store will not be able to be blindly replicated as hostname verification problems will ensue. Obviously, truststores will need to be taken into account as well.
-
-
-------------------------------------------------------------------------------
-##Mapping Gateway URLs to Hadoop cluster URLs
-------------------------------------------------------------------------------
-The Gateway functions much like a reverse proxy.  As such it maintains a mapping of URLs that are exposed
-externally by the Gateway to URLs that are provided by the Hadoop cluster.  Examples of mappings for the NameNode and
-Templeton are shown below.  These mappings are generated from the combination of the Gateway configuration file
-(i.e. {GATEWAY_HOME}/gateway-site.xml) and the cluster topology descriptors
-(e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-*   HDFS (NameNode)
-*   	Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/namenode/api/v1
-*     	Cluster: http://<namenode-host>:50070/webhdfs/v1
-*   Templeton
-*     	Gateway: http://<gateway-host>:<gateway-port>/<gateway-path>/<cluster-name>/templeton/api/v1
-*     	Cluster: http://<templeton-host>:50111/templeton/v1
-
-The values for <gateway-host>, <gateway-port>, <gateway-path> are provided via the Gateway configuration file
-(i.e. {GATEWAY_HOME}/gateway-site.xml).
-
-The value for <cluster-name> is derived from the name of the cluster topology descriptor
-(e.g. {GATEWAY_HOME}/deployments/<cluster-name>.xml).
-
-The values for <namenode-host> and <templeton-host> are provided via the cluster topology descriptor.
-
-Note: The ports 50070 and 50111 are the defaults for NameNode and Templeton respectively.
-      Their values can also be provided via the cluster topology descriptor.
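The mapping can be sketched concretely in shell. The host names and ports below are illustrative placeholders, not values from a real deployment; the actual values come from gateway-site.xml and the cluster topology descriptor as described above.

```shell
# Illustrative values only -- in a real deployment these come from
# {GATEWAY_HOME}/gateway-site.xml and the cluster topology descriptor.
GATEWAY_HOST=localhost
GATEWAY_PORT=8888
GATEWAY_PATH=gateway
CLUSTER_NAME=sample
NAMENODE_HOST=nn.example.com
NAMENODE_PORT=50070

# The URL exposed externally by the Gateway for WebHDFS...
GATEWAY_URL="http://${GATEWAY_HOST}:${GATEWAY_PORT}/${GATEWAY_PATH}/${CLUSTER_NAME}/namenode/api/v1"
# ...and the cluster URL it maps to.
CLUSTER_URL="http://${NAMENODE_HOST}:${NAMENODE_PORT}/webhdfs/v1"

echo "${GATEWAY_URL} -> ${CLUSTER_URL}"
```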
-
-------------------------------------------------------------------------------
-##Enabling logging
-------------------------------------------------------------------------------
-If necessary, you can enable additional logging by editing the log4j.properties file in the conf directory.
-Changing the rootLogger value from ERROR to DEBUG will generate a large amount of debug logging.  A number
-of useful, finer-grained loggers are also provided in the file.
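For example, the rootLogger line in conf/log4j.properties could be changed as follows. This is a sketch; keep the appender list exactly as it appears in the shipped file (the `stdout` appender name here is hypothetical):

```
# conf/log4j.properties
# ERROR -> DEBUG enables verbose logging; expect a large volume of output.
log4j.rootLogger=DEBUG, stdout
```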
-
-------------------------------------------------------------------------------
-##Filing bugs
-------------------------------------------------------------------------------
-File bugs at hortonworks.jira.com under Project "Hadoop Gateway Development".
-Include the results of
-  java -jar bin/gateway-server-0.2.0-SNAPSHOT.jar -version
-in the Environment section.  Also include the version of Hadoop that you are using.
-
-------------------------------------------------------------------------------
-##Example
-------------------------------------------------------------------------------
-The example below illustrates the sequence of curl commands that could be used to run a word count job.  It utilizes
-the hadoop-examples.jar from a Hadoop install.  Take care to follow the
-instructions below for steps 3/4 and 5/6 where the Location header returned by the call to the NameNode is copied for
-use with the call to the DataNode that follows it.
-
-### 1. Create a test input directory /tmp/test/input
-> curl -i -u mapred:mapred-password -X PUT \
->   'http://localhost:8888/gateway/sample/namenode/api/v1/tmp/test/input?op=MKDIRS'
-
-### 2. Create a test output directory /tmp/test/output
-> curl -i -u mapred:mapred-password -X PUT \
->   'http://localhost:8888/gateway/sample/namenode/api/v1/tmp/test/output?op=MKDIRS'
-
-### 3. Create the inode for hadoop-examples.jar in /tmp/test
-> curl -i -u mapred:mapred-password -X PUT \
->   'http://localhost:8888/gateway/sample/namenode/api/v1/tmp/test/hadoop-examples.jar?op=CREATE'
-
-### 4. Upload hadoop-examples.jar to /tmp/test.  Use a hadoop-examples.jar from a Hadoop install.
-> curl -i -u mapred:mapred-password -T hadoop-examples.jar -X PUT '{Value of Location header from command above}'
-
-### 5. Create the inode for a sample file readme.txt in /tmp/test/input
-> curl -i -u mapred:mapred-password -X PUT \
->   'http://localhost:8888/gateway/sample/namenode/api/v1/tmp/test/input/readme.txt?op=CREATE'
-
-### 6. Upload readme.txt to /tmp/test/input.  Use the readme.txt in {GATEWAY_HOME}.
-> curl -i -u mapred:mapred-password -T readme.txt -X PUT '{Value of Location header from command above}'
-
-### 7. Submit the word count job
-> curl -i -u mapred:mapred-password -X POST \
->   -d jar=/tmp/test/hadoop-examples.jar -d class=org.apache.hadoop.examples.WordCount \
->   -d arg=/tmp/test/input -d arg=/tmp/test/output \
->   'http://localhost:8888/gateway/sample/templeton/api/v1/mapreduce/jar'
-
-### 8. Look at the status of the job queue
-> curl -i -u mapred:mapred-password -X GET \
->   'http://localhost:8888/gateway/sample/templeton/api/v1/queue'
-
-### 9. List the contents of the output directory /tmp/test/output
-> curl -i -u mapred:mapred-password -X GET \
->   'http://localhost:8888/gateway/sample/namenode/api/v1/tmp/test/output?op=LISTSTATUS'
-
-### 10. Cleanup the test directory
-> curl -i -u mapred:mapred-password -X DELETE \
->   'http://localhost:8888/gateway/sample/namenode/api/v1/tmp/test?op=DELETE&recursive=true'
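The manual copy of the Location header between each create and upload step can also be scripted. The sketch below parses the header out of a saved response; the redirect URL shown is a made-up example of what a NameNode might return, not output from a real cluster.

```shell
# A made-up WebHDFS 307 response of the kind returned by the CREATE call.
RESPONSE='HTTP/1.1 307 Temporary Redirect
Location: http://datanode-host:50075/webhdfs/v1/tmp/test/hadoop-examples.jar?op=CREATE
Content-Length: 0'

# Extract the Location header value, stripping any trailing carriage return.
LOCATION=$(printf '%s\n' "$RESPONSE" | grep -i '^Location:' | sed 's/^[Ll]ocation: *//' | tr -d '\r')
echo "$LOCATION"

# The upload step could then run without manual copying:
#   curl -i -u mapred:mapred-password -T hadoop-examples.jar -X PUT "$LOCATION"
```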

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/f162485a/gateway-site/pom.xml
----------------------------------------------------------------------
diff --git a/gateway-site/pom.xml b/gateway-site/pom.xml
index e11a179..18e9934 100644
--- a/gateway-site/pom.xml
+++ b/gateway-site/pom.xml
@@ -32,6 +32,12 @@
     <description>Knox is a gateway for Hadoop clusters.</description>
     <url>http://incubator.apache.org/knox</url>
 
+    <properties>
+        <HHH>###</HHH>
+        <HHHH>####</HHHH>
+        <HHHHH>#####</HHHHH>
+    </properties>
+
     <licenses>
         <license>
             <name>The Apache Software License, Version 2.0</name>

http://git-wip-us.apache.org/repos/asf/incubator-knox/blob/f162485a/gateway-site/src/site/markdown/client.md.vm
----------------------------------------------------------------------
diff --git a/gateway-site/src/site/markdown/client.md.vm b/gateway-site/src/site/markdown/client.md.vm
index 6cf66d8..fc325d1 100644
--- a/gateway-site/src/site/markdown/client.md.vm
+++ b/gateway-site/src/site/markdown/client.md.vm
@@ -55,7 +55,7 @@ This document assumes a few things about your environment in order to simplify t
 2. The Apache Knox Gateway is installed and functional.
 3. The example commands are executed within the context of the GATEWAY_HOME current directory.
 The GATEWAY_HOME directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
-4. A few examples require the use of commands from a standard Groovy installation.
+4. A few examples require the use of commands from a standard Groovy installation.  These examples are optional but to try them you will need Groovy [installed][15].
 
 
 Usage
@@ -162,7 +162,7 @@ Constructs
 ----------
 In order to understand the DSL there are three primary constructs that need to be understood.
 
-### Hadoop
+${HHH} Hadoop
 This construct encapsulates the client side session state that will be shared between all command invocations.
 In particular it will simplify the management of any tokens that need to be presented with each command invocation.
 It also manages a thread pool that is used by all asynchronous commands which is why it is important to call one of the shutdown methods.
@@ -170,7 +170,7 @@ It also manages a thread pool that is used by all asynchronous commands which is
 The syntax associated with this is expected to change; we expect that credentials will not need to be provided to the gateway.
 Rather it is expected that some form of access token will be used to initialize the session.
 
-### Services
+${HHH} Services
 Services are the primary extension point for adding new suites of commands.
 The built-in examples are: Hdfs, Job and Workflow.
 The desire for extensibility is the reason for the slightly awkward Hdfs.ls(hadoop) syntax.
@@ -179,16 +179,16 @@ At a minimum it would result in extension commands with a different syntax from
 
 The service objects essentially function as a factory for a suite of commands.
 
-### Commands
+${HHH} Commands
 Commands provide the behavior of the DSL.
 They typically follow a Fluent interface style in order to allow for single line commands.
 There are really three parts to each command: Request, Invocation, Response
 
-#### Request
+${HHHH} Request
 The request is populated by all of the methods following the "verb" method and the "invoke" method.
 For example in Hdfs.rm(hadoop).file(dir).now() the request is populated between the "verb" method rm() and the "invoke" method now().
 
-#### Invocation
+${HHHH} Invocation
 The invocation method controls how the request is invoked.
 Currently synchronous and asynchronous invocation are supported.
 The now() method executes the request and returns the result immediately.
@@ -196,7 +196,7 @@ The later() method submits the request to be executed later and returns a future
 In addition the later() invocation method can optionally be provided a closure to execute when the request is complete.
 See the Futures and Closures sections below for additional detail and examples.
 
-#### Response
+${HHHH} Response
 The response contains the results of the invocation of the request.
 In most cases the response is a thin wrapper over the HTTP response.
 In fact many commands will share a single BasicResponse type that only provides a few simple methods.
@@ -233,22 +233,22 @@ Services
 --------
 There are three basic DSL services and commands bundled with the shell.
 
-### HDFS
+${HHH} HDFS
 Provides basic HDFS commands.
 ***Using these DSL commands requires that WebHDFS be running in the Hadoop cluster.***
 
-### Jobs (Templeton/WebHCat)
+${HHH} Jobs (Templeton/WebHCat)
 Provides basic job submission and status commands.
 ***Using these DSL commands requires that Templeton/WebHCat be running in the Hadoop cluster.***
 
-### Workflow (Oozie)
+${HHH} Workflow (Oozie)
 Provides basic workflow submission and status commands.
 ***Using these DSL commands requires that Oozie be running in the Hadoop cluster.***
 
 
 HDFS Commands (WebHDFS)
 -----------------------
-### ls() - List the contents of a HDFS directory.
+${HHH} ls() - List the contents of a HDFS directory.
 * Request
     * dir (String) - The HDFS directory to list.
 * Response
@@ -256,7 +256,7 @@ HDFS Commands (WebHDFS)
 * Example
     * `Hdfs.ls(hadoop).dir("/").now()`
 
-### rm() - Remove a HDFS file or directory.
+${HHH} rm() - Remove a HDFS file or directory.
 * Request
     * file (String) - The HDFS file or directory to remove.
     * recursive (Boolean) - If the file is a directory also remove any contained files and directories. Optional: default=false
@@ -265,7 +265,7 @@ HDFS Commands (WebHDFS)
 * Example
     * `Hdfs.rm(hadoop).file("/tmp/example").recursive().now()`
 
-### put() - Copy a file from the local file system to HDFS.
+${HHH} put() - Copy a file from the local file system to HDFS.
 * Request
     * text (String) - The text to copy to the remote file.
     * file (String) - The name of a local file to copy to the remote file.
@@ -275,7 +275,7 @@ HDFS Commands (WebHDFS)
 * Example
     * `Hdfs.put(hadoop).file("localFile").to("/tmp/example/remoteFile").now()`
 
-### get() - Copy a file from HDFS to the local file system.
+${HHH} get() - Copy a file from HDFS to the local file system.
 * Request
     * file (String) - The name of the local file to create from the remote file.  If this isn't specified the file content must be read from the response.
     * from (String) - The name of the remote file to copy.
@@ -284,7 +284,7 @@ HDFS Commands (WebHDFS)
 * Example
     * `Hdfs.get(hadoop).file("localFile").from("/tmp/example/remoteFile").now()`
 
-### mkdir() - Create a directory in HDFS.
+${HHH} mkdir() - Create a directory in HDFS.
 * Request
     * dir (String) - The name of the remote directory to create.
     * perm (String) - The permissions to create the remote directory with.  Optional: default="777"
@@ -296,7 +296,7 @@ HDFS Commands (WebHDFS)
 
 Job Commands (WebHCat/Templeton)
 --------------------------------
-### submitJava() - Submit a Java MapReduce job.
+${HHH} submitJava() - Submit a Java MapReduce job.
 * Request
     * jar (String) - The remote file name of the JAR containing the app to execute.
     * app (String) - The app name to execute.  This is, for example, wordcount, not the class name.
@@ -307,7 +307,7 @@ Job Commands (WebHCat/Templeton)
 * Example
     * `Job.submitJava(hadoop).jar(remoteJarName).app(appName).input(remoteInputDir).output(remoteOutputDir).now().jobId`
 
-### submitPig() - Submit a Pig job.
+${HHH} submitPig() - Submit a Pig job.
 * Request
     * file (String) - The remote file name of the pig script.
     * arg (String) - An argument to pass to the script.
@@ -317,7 +317,7 @@ Job Commands (WebHCat/Templeton)
 * Example
     * `Job.submitPig(hadoop).file(remotePigFileName).arg("-v").statusDir(remoteStatusDir).now()`
 
-### submitHive() - Submit a Hive job.
+${HHH} submitHive() - Submit a Hive job.
 * Request
     * file (String) - The remote file name of the hive script.
     * arg (String) - An argument to pass to the script.
@@ -327,7 +327,7 @@ Job Commands (WebHCat/Templeton)
 * Example
     * `Job.submitHive(hadoop).file(remoteHiveFileName).arg("-v").statusDir(remoteStatusDir).now()`
 
-### queryQueue() - Return a list of all job IDs registered to the user.
+${HHH} queryQueue() - Return a list of all job IDs registered to the user.
 * Request
     * No request parameters.
 * Response
@@ -335,7 +335,7 @@ Job Commands (WebHCat/Templeton)
 * Example
     * `Job.queryQueue(hadoop).now().string`
 
-### queryStatus() - Check the status of a job and get related job information given its job ID.
+${HHH} queryStatus() - Check the status of a job and get related job information given its job ID.
 * Request
     * jobId (String) - The job ID to check. This is the ID received when the job was created.
 * Response
@@ -346,7 +346,7 @@ Job Commands (WebHCat/Templeton)
 
 Workflow Commands (Oozie)
 -------------------------
-### submit() - Submit a workflow job.
+${HHH} submit() - Submit a workflow job.
 * Request
     * text (String) - XML formatted workflow configuration string.
     * file (String) - A filename containing XML formatted workflow configuration.
@@ -356,7 +356,7 @@ Workflow Commands (Oozie)
 * Example
     * `Workflow.submit(hadoop).file(localFile).action("start").now()`
 
-### status() - Query the status of a workflow job.
+${HHH} status() - Query the status of a workflow job.
 * Request
     * jobId (String) - The job ID to check. This is the ID received when the job was created.
 * Response
@@ -438,7 +438,7 @@ The easiest way to add these to the shell is to compile them directory into the
 
 These source files are available in the samples directory of the distribution but these are included here for convenience.
 
-### Sample Service (Groovy)
+${HHH} Sample Service (Groovy)
     import org.apache.hadoop.gateway.shell.Hadoop
 
     class SampleService {
@@ -455,7 +455,7 @@ These source files are available in the samples directory of the distribution bu
 
     }
 
-### Sample Simple Command (Groovy)
+${HHH} Sample Simple Command (Groovy)
     import org.apache.hadoop.gateway.shell.AbstractRequest
     import org.apache.hadoop.gateway.shell.BasicResponse
     import org.apache.hadoop.gateway.shell.Hadoop
@@ -491,7 +491,7 @@ These source files are available in the samples directory of the distribution bu
 
     }
 
-### Sample Complex Command (Groovy)
+${HHH} Sample Complex Command (Groovy)
     import com.jayway.jsonpath.JsonPath
     import org.apache.hadoop.gateway.shell.AbstractRequest
     import org.apache.hadoop.gateway.shell.BasicResponse
@@ -549,7 +549,7 @@ These source files are available in the samples directory of the distribution bu
 Groovy
 ------
 The shell included in the distribution is basically an unmodified packaging of the Groovy shell.
-Therefore these command are functionally equivalent if you have Groovy installed.
+Therefore these commands are functionally equivalent if you have Groovy [installed][15].
 
     java -jar bin/shell-0.2.0-SNAPSHOT.jar sample/SmokeTestJob.groovy
     groovy -cp bin/shell-0.2.0-SNAPSHOT.jar sample/SmokeTestJob.groovy
@@ -587,13 +587,14 @@ There are a variety of Groovy tools that make it very easy to work with the stan
 In Groovy the creation of XML or JSON is typically done via a "builder" and parsing done via a "slurper".
 In addition, once JSON or XML is "slurped", GPath, an XPath-like feature built into Groovy, can be used to access data.
 * XML
-  * Markup Builder [Overview](http://groovy.codehaus.org/Creating+XML+using+Groovy's+MarkupBuilder), [API](http://groovy.codehaus.org/api/groovy/xml/MarkupBuilder.html)
-  * XML Slurper [Overview](http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper), [API](http://groovy.codehaus.org/api/groovy/util/XmlSlurper.html)
-  * XPath [Overview](http://groovy.codehaus.org/GPath), [API]
+  * Markup Builder [Overview][5], [API][6]
+  * XML Slurper [Overview][7], [API][8]
+  * XPath [Overview][9], [API][10]
 * JSON
-  * JSON Builder [API](http://groovy.codehaus.org/gapi/groovy/json/JsonBuilder.html)
-  * JSON Slurper [API](http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html)
-* GPath [Overview](http://groovy.codehaus.org/GPath)
+  * JSON Builder [API][11]
+  * JSON Slurper [API][12]
+  * JSON Path [API][14]
+* GPath [Overview][13]
 
 
 Disclaimer
@@ -612,4 +613,15 @@ fully endorsed by the ASF.
 [1]: http://en.wikipedia.org/wiki/Domain-specific_language
 [2]: http://groovy.codehaus.org/
 [3]: https://code.google.com/p/rest-assured/
-[4]: http://en.wikipedia.org/wiki/Fluent_interface
\ No newline at end of file
+[4]: http://en.wikipedia.org/wiki/Fluent_interface
+[5]: http://groovy.codehaus.org/Creating+XML+using+Groovy's+MarkupBuilder
+[6]: http://groovy.codehaus.org/api/groovy/xml/MarkupBuilder.html
+[7]: http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper
+[8]: http://groovy.codehaus.org/api/groovy/util/XmlSlurper.html
+[9]: http://groovy.codehaus.org/GPath
+[10]: http://docs.oracle.com/javase/1.5.0/docs/api/javax/xml/xpath/XPath.html
+[11]: http://groovy.codehaus.org/gapi/groovy/json/JsonBuilder.html
+[12]: http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html
+[13]: http://groovy.codehaus.org/GPath
+[14]: https://code.google.com/p/json-path/
+[15]: http://groovy.codehaus.org/Installing+Groovy
\ No newline at end of file

