knox-commits mailing list archives

From m...@apache.org
Subject svn commit: r1850181 [5/13] - in /knox: site/books/knox-1-3-0/ site/books/knox-1-3-0/adminui/ trunk/books/1.3.0/ trunk/books/1.3.0/dev-guide/ trunk/books/1.3.0/img/ trunk/books/1.3.0/img/adminui/
Date Wed, 02 Jan 2019 17:31:31 GMT
Added: knox/site/books/knox-1-3-0/warning.png
URL: http://svn.apache.org/viewvc/knox/site/books/knox-1-3-0/warning.png?rev=1850181&view=auto
==============================================================================
Binary file - no diff available.

Propchange: knox/site/books/knox-1-3-0/warning.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: knox/site/books/knox-1-3-0/workflow-configuration.xml
URL: http://svn.apache.org/viewvc/knox/site/books/knox-1-3-0/workflow-configuration.xml?rev=1850181&view=auto
==============================================================================
--- knox/site/books/knox-1-3-0/workflow-configuration.xml (added)
+++ knox/site/books/knox-1-3-0/workflow-configuration.xml Wed Jan  2 17:31:29 2019
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+<configuration>
+    <property>
+        <name>jobTracker</name>
+        <value>REPLACE.JOBTRACKER.RPCHOSTPORT</value>
+        <!-- Example: <value>localhost:50300</value> -->
+    </property>
+    <property>
+        <name>nameNode</name>
+        <value>hdfs://REPLACE.NAMENODE.RPCHOSTPORT</value>
+        <!-- Example: <value>hdfs://localhost:8020</value> -->
+    </property>
+    <property>
+        <name>oozie.wf.application.path</name>
+        <value>hdfs://REPLACE.NAMENODE.RPCHOSTPORT/tmp/test</value>
+        <!-- Example: <value>hdfs://localhost:8020/tmp/test</value> -->
+    </property>
+    <property>
+        <name>user.name</name>
+        <value>mapred</value>
+    </property>
+    <property>
+        <name>inputDir</name>
+        <value>/tmp/test/input</value>
+    </property>
+    <property>
+        <name>outputDir</name>
+        <value>/tmp/test/output</value>
+    </property>
+</configuration>

Added: knox/site/books/knox-1-3-0/workflow-definition.xml
URL: http://svn.apache.org/viewvc/knox/site/books/knox-1-3-0/workflow-definition.xml?rev=1850181&view=auto
==============================================================================
--- knox/site/books/knox-1-3-0/workflow-definition.xml (added)
+++ knox/site/books/knox-1-3-0/workflow-definition.xml Wed Jan  2 17:31:29 2019
@@ -0,0 +1,36 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+<workflow-app xmlns="uri:oozie:workflow:0.2" name="wordcount-workflow">
+    <start to="root"/>
+    <action name="root">
+        <java>
+            <job-tracker>${jobTracker}</job-tracker>
+            <name-node>${nameNode}</name-node>
+            <main-class>org.apache.hadoop.examples.WordCount</main-class>
+            <arg>${inputDir}</arg>
+            <arg>${outputDir}</arg>
+        </java>
+        <ok to="end"/>
+        <error to="fail"/>
+    </action>
+    <kill name="fail">
+        <message>Java failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
+    </kill>
+    <end name="end"/>
+</workflow-app>
\ No newline at end of file

Added: knox/trunk/books/1.3.0/admin_api.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/admin_api.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/admin_api.md (added)
+++ knox/trunk/books/1.3.0/admin_api.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,523 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Admin API
+
+Access to the administrator functions of Knox is provided by the Admin REST API.
+
+#### Admin API URL
+
+The URL mapping for the Knox Admin API is:
+
+| ------- | -----------------------------------------------------------------------------   |
+| GatewayAPI | `https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1`				|   
+
+Please note that to access this API, the user attempting to connect must have admin credentials configured on the LDAP server.
+
+
+##### API Documentation 
+
+<table>
+  <thead>
+    <th>Resource</th>
+    <th>Operation</th>
+    <th>Description</th>
+  </thead>
+  <tr>
+    <td>version</td>
+    <td>GET</td>
+    <td>Get the gateway version and the associated version hash</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/version -H Accept:application/json</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>
+{
+  "ServerVersion" : {
+    "version" : "VERSION_ID",
+    "hash" : "VERSION_HASH"
+  }
+}     </pre>
+    </td>
+  </tr>
+
+  <tr>
+    <td>topologies</td>
+    <td>GET</td>
+    <td>Get an enumeration of the topologies currently deployed in the gateway.</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/topologies -H Accept:application/json</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>
+{
+   "topologies" : {
+      "topology" : [ {
+         "name" : "admin",
+         "timestamp" : "1501508536000",
+         "uri" : "https://localhost:8443/gateway/admin",
+         "href" : "https://localhost:8443/gateway/admin/api/v1/topologies/admin"
+      }, {
+         "name" : "sandbox",
+         "timestamp" : "1501508536000",
+         "uri" : "https://localhost:8443/gateway/sandbox",
+         "href" : "https://localhost:8443/gateway/admin/api/v1/topologies/sandbox"
+      } ]
+   }
+}     </pre>
+    </td>
+  </tr>
+
+  <tr>
+    <td>topologies/{id}</td>
+    <td>GET</td>
+    <td>Get a JSON representation of the specified topology</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/topologies/admin -H Accept:application/json</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>
+{
+  "name": "admin",
+  "providers": [{
+    "enabled": true,
+    "name": "ShiroProvider",
+    "params": {
+      "sessionTimeout": "30",
+      "main.ldapRealm": "org.apache.knox.gateway.shirorealm.KnoxLdapRealm",
+      "main.ldapRealm.userDnTemplate": "uid={0},ou=people,dc=hadoop,dc=apache,dc=org",
+      "main.ldapRealm.contextFactory.url": "ldap://localhost:33389",
+      "main.ldapRealm.contextFactory.authenticationMechanism": "simple",
+      "urls./**": "authcBasic"
+    },
+    "role": "authentication"
+  }, {
+    "enabled": true,
+    "name": "AclsAuthz",
+    "params": {
+      "knox.acl": "admin;*;*"
+    },
+    "role": "authorization"
+  }, {
+    "enabled": true,
+    "name": "Default",
+    "params": {},
+    "role": "identity-assertion"
+  }, {
+    "enabled": true,
+    "name": "static",
+    "params": {
+      "localhost": "sandbox,sandbox.hortonworks.com"
+    },
+    "role": "hostmap"
+  }],
+  "services": [{
+      "name": null,
+      "params": {},
+      "role": "KNOX",
+      "url": null
+  }],
+  "timestamp": 1406672646000,
+  "uri": "https://localhost:8443/gateway/admin"
+}     </pre>
+    </td>
+  </tr>
+
+  <tr>
+    <td>&nbsp;</td>
+    <td>PUT</td>
+    <td>Add (and deploy) a topology</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/topologies/mytopology \
+     -X PUT \
+     -H Content-Type:application/xml \
+     -d "@mytopology.xml"</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+        <pre>
+&lt;?xml version="1.0" encoding="UTF-8"?&gt;
+&lt;topology&gt;
+   &lt;uri&gt;https://localhost:8443/gateway/mytopology&lt;/uri&gt;
+   &lt;name&gt;mytopology&lt;/name&gt;
+   &lt;timestamp&gt;1509720338000&lt;/timestamp&gt;
+   &lt;gateway&gt;
+      &lt;provider&gt;
+         &lt;role&gt;authentication&lt;/role&gt;
+         &lt;name&gt;ShiroProvider&lt;/name&gt;
+         &lt;enabled&gt;true&lt;/enabled&gt;
+         &lt;param&gt;
+            &lt;name&gt;sessionTimeout&lt;/name&gt;
+            &lt;value&gt;30&lt;/value&gt;
+         &lt;/param&gt;
+         &lt;param&gt;
+            &lt;name&gt;main.ldapRealm&lt;/name&gt;
+            &lt;value&gt;org.apache.knox.gateway.shirorealm.KnoxLdapRealm&lt;/value&gt;
+         &lt;/param&gt;
+         &lt;param&gt;
+            &lt;name&gt;main.ldapContextFactory&lt;/name&gt;
+            &lt;value&gt;org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory&lt;/value&gt;
+         &lt;/param&gt;
+         &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.contextFactory&lt;/name&gt;
+            &lt;value&gt;$ldapContextFactory&lt;/value&gt;
+         &lt;/param&gt;
+         &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.userDnTemplate&lt;/name&gt;
+            &lt;value&gt;uid={0},ou=people,dc=hadoop,dc=apache,dc=org&lt;/value&gt;
+         &lt;/param&gt;
+         &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.contextFactory.url&lt;/name&gt;
+            &lt;value&gt;ldap://localhost:33389&lt;/value&gt;
+         &lt;/param&gt;
+         &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.contextFactory.authenticationMechanism&lt;/name&gt;
+            &lt;value&gt;simple&lt;/value&gt;
+         &lt;/param&gt;
+         &lt;param&gt;
+            &lt;name&gt;urls./**&lt;/name&gt;
+            &lt;value&gt;authcBasic&lt;/value&gt;
+         &lt;/param&gt;
+      &lt;/provider&gt;
+      &lt;provider&gt;
+         &lt;role&gt;identity-assertion&lt;/role&gt;
+         &lt;name&gt;Default&lt;/name&gt;
+         &lt;enabled&gt;true&lt;/enabled&gt;
+      &lt;/provider&gt;
+      &lt;provider&gt;
+         &lt;role&gt;hostmap&lt;/role&gt;
+         &lt;name&gt;static&lt;/name&gt;
+         &lt;enabled&gt;true&lt;/enabled&gt;
+         &lt;param&gt;
+            &lt;name&gt;localhost&lt;/name&gt;
+            &lt;value&gt;sandbox,sandbox.hortonworks.com&lt;/value&gt;
+         &lt;/param&gt;
+      &lt;/provider&gt;
+   &lt;/gateway&gt;
+   &lt;service&gt;
+      &lt;role&gt;NAMENODE&lt;/role&gt;
+      &lt;url&gt;hdfs://localhost:8020&lt;/url&gt;
+   &lt;/service&gt;
+   &lt;service&gt;
+      &lt;role&gt;JOBTRACKER&lt;/role&gt;
+      &lt;url&gt;rpc://localhost:8050&lt;/url&gt;
+   &lt;/service&gt;
+   &lt;service&gt;
+      &lt;role&gt;WEBHDFS&lt;/role&gt;
+      &lt;url&gt;http://localhost:50070/webhdfs&lt;/url&gt;
+   &lt;/service&gt;
+   &lt;service&gt;
+      &lt;role&gt;WEBHCAT&lt;/role&gt;
+      &lt;url&gt;http://localhost:50111/templeton&lt;/url&gt;
+   &lt;/service&gt;
+   &lt;service&gt;
+      &lt;role&gt;OOZIE&lt;/role&gt;
+      &lt;url&gt;http://localhost:11000/oozie&lt;/url&gt;
+   &lt;/service&gt;
+   &lt;service&gt;
+      &lt;role&gt;WEBHBASE&lt;/role&gt;
+      &lt;url&gt;http://localhost:60080&lt;/url&gt;
+   &lt;/service&gt;
+   &lt;service&gt;
+      &lt;role&gt;HIVE&lt;/role&gt;
+      &lt;url&gt;http://localhost:10001/cliservice&lt;/url&gt;
+   &lt;/service&gt;
+   &lt;service&gt;
+      &lt;role&gt;RESOURCEMANAGER&lt;/role&gt;
+      &lt;url&gt;http://localhost:8088/ws&lt;/url&gt;
+   &lt;/service&gt;
+&lt;/topology&gt;</pre>
+    </td>
+  </tr>
+
+  <tr>
+    <td>&nbsp;</td>
+    <td>DELETE</td>
+    <td>Delete (and undeploy) a topology</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/topologies/mytopology -X DELETE</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td><pre>{ "deleted" : true }</pre></td>
+  </tr>
+
+  <tr>
+    <td>providerconfig</td>
+    <td>GET</td>
+    <td>Get an enumeration of the shared provider configurations currently deployed to the gateway.</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/providerconfig</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>
+{
+  "href" : "https://localhost:8443/gateway/admin/api/v1/providerconfig",
+  "items" : [ {
+    "href" : "https://localhost:8443/gateway/admin/api/v1/providerconfig/myproviders",
+    "name" : "myproviders.xml"
+  },{
+   "href" : "https://localhost:8443/gateway/admin/api/v1/providerconfig/sandbox-providers",
+   "name" : "sandbox-providers.xml"
+  } ]
+}     </pre>
+    </td>
+  </tr>
+
+  <tr>
+    <td>providerconfig/{id}</td>
+    <td>GET</td>
+    <td>Get the XML content of the specified shared provider configuration.</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/providerconfig/sandbox-providers \
+     -H Accept:application/xml</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>
+&lt;gateway&gt;
+    &lt;provider&gt;
+        &lt;role&gt;authentication&lt;/role&gt;
+        &lt;name&gt;ShiroProvider&lt;/name&gt;
+        &lt;enabled&gt;true&lt;/enabled&gt;
+        &lt;param&gt;
+            &lt;name&gt;sessionTimeout&lt;/name&gt;
+            &lt;value&gt;30&lt;/value&gt;
+        &lt;/param&gt;
+        &lt;param&gt;
+            &lt;name&gt;main.ldapRealm&lt;/name&gt;
+            &lt;value&gt;org.apache.knox.gateway.shirorealm.KnoxLdapRealm&lt;/value&gt;
+        &lt;/param&gt;
+        &lt;param&gt;
+            &lt;name&gt;main.ldapContextFactory&lt;/name&gt;
+            &lt;value&gt;org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory&lt;/value&gt;
+        &lt;/param&gt;
+        &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.contextFactory&lt;/name&gt;
+            &lt;value&gt;$ldapContextFactory&lt;/value&gt;
+        &lt;/param&gt;
+        &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.userDnTemplate&lt;/name&gt;
+            &lt;value&gt;uid={0},ou=people,dc=hadoop,dc=apache,dc=org&lt;/value&gt;
+        &lt;/param&gt;
+        &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.contextFactory.url&lt;/name&gt;
+            &lt;value&gt;ldap://localhost:33389&lt;/value&gt;
+        &lt;/param&gt;
+        &lt;param&gt;
+            &lt;name&gt;main.ldapRealm.contextFactory.authenticationMechanism&lt;/name&gt;
+            &lt;value&gt;simple&lt;/value&gt;
+        &lt;/param&gt;
+        &lt;param&gt;
+            &lt;name&gt;urls./**&lt;/name&gt;
+            &lt;value&gt;authcBasic&lt;/value&gt;
+        &lt;/param&gt;
+    &lt;/provider&gt;
+
+    &lt;provider&gt;
+        &lt;role&gt;identity-assertion&lt;/role&gt;
+        &lt;name&gt;Default&lt;/name&gt;
+        &lt;enabled&gt;true&lt;/enabled&gt;
+    &lt;/provider&gt;
+
+    &lt;provider&gt;
+        &lt;role&gt;hostmap&lt;/role&gt;
+        &lt;name&gt;static&lt;/name&gt;
+        &lt;enabled&gt;true&lt;/enabled&gt;
+        &lt;param&gt;
+            &lt;name&gt;localhost&lt;/name&gt;
+            &lt;value&gt;sandbox,sandbox.hortonworks.com&lt;/value&gt;
+        &lt;/param&gt;
+    &lt;/provider&gt;
+&lt;/gateway&gt;</pre>
+    </td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>PUT</td>
+    <td>Add a shared provider configuration.</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/providerconfig/sandbox-providers \
+     -X PUT \
+     -H Content-Type:application/xml \
+     -d "@sandbox-providers.xml"</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td><pre>HTTP 201 Created</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>DELETE</td>
+    <td>Delete a shared provider configuration</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/providerconfig/sandbox-providers -X DELETE</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>{ "deleted" : "provider config sandbox-providers" }</pre>
+    </td>
+  </tr>
+
+  <tr>
+    <td>descriptors</td>
+    <td>GET</td>
+    <td>Get an enumeration of the simple descriptors currently deployed to the gateway.</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/descriptors -H Accept:application/json</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>
+{
+   "href" : "https://localhost:8443/gateway/admin/api/v1/descriptors",
+   "items" : [ {
+      "href" : "https://localhost:8443/gateway/admin/api/v1/descriptors/docker-sandbox",
+      "name" : "docker-sandbox.json"
+   }, {
+      "href" : "https://localhost:8443/gateway/admin/api/v1/descriptors/mytopology",
+      "name" : "mytopology.yml"
+   } ]
+}     </pre>
+    </td>
+  </tr>
+
+  <tr>
+    <td>descriptors/{id}</td>
+    <td>GET</td>
+    <td>Get the content of the specified descriptor.</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/descriptors/docker-sandbox \
+     -H Accept:application/json</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>
+{
+  "discovery-type":"AMBARI",
+  "discovery-address":"http://sandbox.hortonworks.com:8080",
+  "provider-config-ref":"sandbox-providers",
+  "cluster":"Sandbox",
+  "services":[
+    {"name":"NAMENODE"},
+    {"name":"JOBTRACKER"},
+    {"name":"WEBHDFS"},
+    {"name":"WEBHCAT"},
+    {"name":"OOZIE"},
+    {"name":"WEBHBASE"},
+    {"name":"HIVE"},
+    {"name":"RESOURCEMANAGER"} ]
+}    </pre>
+    </td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>PUT</td>
+    <td>Add a simple descriptor (and generate and deploy a full topology descriptor).</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/descriptors/docker-sandbox \
+     -X PUT \
+     -H Content-Type:application/json \
+     -d "@docker-sandbox.json"</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td><pre>HTTP 201 Created</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>DELETE</td>
+    <td>Delete a simple descriptor (and undeploy the associated topology)</td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Request</td>
+    <td><pre>curl -iku admin:admin-password {GatewayAPI}/descriptors/docker-sandbox -X DELETE</pre></td>
+  </tr>
+  <tr>
+    <td>&nbsp;</td>
+    <td>Example Response</td>
+    <td>
+      <pre>{ "deleted" : "descriptor docker-sandbox" }</pre>
+    </td>
+  </tr>
+
+</table>
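+
+As a worked example, the following sequence of requests (a sketch, assuming `sandbox-providers.xml` and `docker-sandbox.json` files exist in the current directory, and using the concrete gateway address from the responses above in place of `{GatewayAPI}`) first adds a shared provider configuration, then adds a descriptor that references it, which triggers the generation and deployment of the corresponding topology:
+
+    # Add the shared provider configuration
+    curl -iku admin:admin-password -X PUT \
+         -H Content-Type:application/xml \
+         -d "@sandbox-providers.xml" \
+         https://localhost:8443/gateway/admin/api/v1/providerconfig/sandbox-providers
+
+    # Add a descriptor referencing the provider configuration above;
+    # Knox generates and deploys the corresponding topology
+    curl -iku admin:admin-password -X PUT \
+         -H Content-Type:application/json \
+         -d "@docker-sandbox.json" \
+         https://localhost:8443/gateway/admin/api/v1/descriptors/docker-sandbox
+
+    # Verify that the generated topology has been deployed
+    curl -iku admin:admin-password \
+         -H Accept:application/json \
+         https://localhost:8443/gateway/admin/api/v1/topologies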
+
+
+

Added: knox/trunk/books/1.3.0/admin_ui.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/admin_ui.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/admin_ui.md (added)
+++ knox/trunk/books/1.3.0/admin_ui.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,210 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Admin UI ###
+
+The Admin UI is a web application hosted by Knox, which provides the ability to manage provider configurations, descriptors, and topologies.
+
+As an authoring facility, it eliminates the need for ssh/scp access to the Knox host(s) to effect topology changes.<br>
+Furthermore, using the Admin UI simplifies the management of topologies in Knox HA deployments by eliminating the need to copy files to multiple Knox hosts.
+
+
+#### Admin UI URL ####
+
+The URL mapping for the Knox Admin UI is:
+
+| ------- | ----------------------------------------------------------------------------------------------  |
+| Gateway | `https://{gateway-host}:{gateway-port}/{gateway-path}/manager/admin-ui/` |   
+
+
+##### Authentication
+
+The Admin UI is deployed using the __manager__ topology. The out-of-the-box authentication mechanism is KnoxSSO, backed by the demo LDAP server.
+Only someone in the __admin__ role can access the UI functionality.
+ 
+##### Basic Navigation
+Initially, the Admin UI presents the types of resources which can be managed: [__Provider Configurations__](#Provider+Configurations), [__Descriptors__](#Descriptors), and [__Topologies__](#Topologies).
+
+<img src="adminui/image1.png" style="width:6.5in;height:3.28403in" />
+
+Selecting a resource type yields a listing of the existing resources of that type in the adjacent column, and selecting an individual resource
+presents the details of that selected resource.
+
+For the provider configuration and descriptor resource types, the <img src="adminui/plus-icon.png" style="width:20px;height:20px;vertical-align:bottom"/>
+icon next to the resource list header triggers the facility for creating a new resource of that type.<br>
+Modification options, including deletion, are available from the detail view for an individual resource.
+
+
+##### Provider Configurations
+
+The Admin UI lists the provider configurations currently deployed to Knox.
+
+By choosing a particular provider configuration from the list, its details can be viewed and edited.<br>
+The provider configuration can also be deleted (as long as there are no referencing descriptors).
+
+By default, there is a provider configuration named __*default-providers*__.
+
+<img src="adminui/image2.png" style="width:6.5in;height:3.76597in" />
+
+###### Editing Provider Configurations
+For each provider in a given provider configuration, the following attributes can be modified:
+
+* The provider can be enabled/disabled
+* Parameters can be added (<img src="adminui/plus-icon.png" style="width:20px;height:20px;vertical-align:bottom"/>) or removed (<img src="adminui/x-icon.png" style="height:12px;vertical-align:middle"/>)
+* Parameter values can be modified (by clicking on the value)
+  <img src="adminui/image21.png"/>
+
+<br>
+To persist changes, the <img src="adminui/save-icon.png" style="height:32px;vertical-align:bottom"/> button must be clicked. To revert *unsaved* changes, click the <img src="adminui/undo-icon.png" style="height:32px;vertical-align:bottom"/> button or simply choose another resource.
+<br>
+
+###### Create Provider Configurations
+
+The Admin UI provides the ability to define new provider configurations, which can subsequently be referenced by one or more descriptors.
+
+These provider configurations can be created based on the functionality needed, rather than requiring intimate knowledge of the various provider names
+and their respective parameter names.
+
+A provider configuration is a named set of providers. The wizard allows an administrator to specify the name, and add providers to it.
+
+<img src="adminui/image3.png" style="width:6.5in;height:3.11319in" />
+
+To add a provider, first a category must be chosen.
+
+<img src="adminui/image4.png" style="width:6.5in;height:2.27917in" />
+
+After choosing a category, the type within that category must be selected.
+
+<img src="adminui/image5.png" style="width:6.5in;height:3.19097in" />
+
+Finally, for the selected type, the type-specific parameter values can be specified.
+
+<img src="adminui/image6.png" style="width:6.5in;height:2.74167in" />
+
+After adding a provider, others can be added similarly by way of the __Add Provider__ button.
+
+<img src="adminui/image7.png" style="width:6.5in;height:1.99792in" />
+
+###### Composite Provider Types
+
+The wizard behaves a little differently for some provider types, such as the HA provider, than it does for the others.
+
+For example, when you choose the HA provider category, you subsequently choose a service role (e.g., WEBHDFS), and specify the parameter values for that service role's entry in the HA provider.
+
+<img src="adminui/image8.png" style="width:6.5in;height:1.34028in" />
+
+<img src="adminui/image9.png" style="width:6.5in;height:3.36458in" />
+
+If multiple services are configured in this way, the result is still a single HA provider, which contains all of the service role configurations.
+
+<img src="adminui/image10.png" style="width:6.5in;height:2.20208in" />
+
+###### Persisting the New Provider Configuration
+
+After adding all the desired providers to the new configuration, choosing <img src="adminui/ok-button.png" style="height:24px;vertical-align:bottom"/> persists it.
+
+<img src="adminui/image11.png" style="width:6.25in;height:6.95833in" />
+
+
+##### Descriptors
+
+A descriptor is essentially a named set of service roles to be proxied, together with a provider configuration reference.
+The Admin UI lists the descriptors currently deployed to Knox.
+
+By choosing a particular descriptor from the list, its details can be viewed and edited. The descriptor can also be deleted.
+
+Modifications to descriptors will result in topology changes. When a descriptor is saved or deleted, the corresponding topology is \[re\]generated or deleted/undeployed respectively.
+
+<img src="adminui/image12.png" style="width:6.5in;height:2.06319in" />
+
+<img src="adminui/image13.png" style="width:6.5in;height:2.81181in" />
+
+<img src="adminui/image14.png" style="width:6.5in;height:3.50556in" />
+
+###### Create Descriptors
+
+The Admin UI provides the ability to define new descriptors, which result in the generation and deployment of corresponding topologies.
+
+The __new descriptor__ dialog provides the ability to specify the name, which will also be the name of the resulting topology. It also
+allows one or more supported service roles to be selected for inclusion.
+
+<img src="adminui/image15.png" style="width:6.5in;height:3.82361in" />
+
+The provider configuration reference can be entered manually, or the provider configuration selector can be used to specify the name of an
+existing provider configuration.
+
+<img src="adminui/image16.png" style="width:6.5in;height:3.88125in" />
+
+Optionally, discovery details can also be specified to direct Knox to discover the endpoints for the declared service roles from the Ambari-managed
+target cluster.
+
+<img src="adminui/image17.png" style="width:6.5in;height:5.24167in" />
+
+Choosing <img src="adminui/ok-button.png" style="height:24px;vertical-align:bottom"/> results in the persistence of the descriptor, and subsequently, the generation and deployment of the associated topology.
+
+###### Service Discovery
+
+Descriptors are a means to *declaratively* specify which services should be proxied by a particular topology, allowing Knox to interrogate Ambari
+to determine the endpoint URLs for those declared services. The Service Discovery options tell Knox how to connect to the desired Ambari cluster
+to perform this endpoint discovery.
+
+*Address*
+
+This property specifies the address of the Ambari instance managing the cluster hosting the services whose endpoints are to be discovered.
+
+*Cluster*
+
+This property specifies which of the clusters, among those managed by the specified Ambari instance, hosts the services whose endpoints are to be determined.
+
+*Username*
+
+This is the identity of the Ambari user (assigned at least the *Cluster User* role), which will be used to get service configuration details from Ambari.
+
+*Password Alias*
+
+This is the Knox alias whose value is the password associated with the specified username.
+
+This alias must have been defined prior to specifying it in a descriptor, or else the service discovery will fail for authentication reasons.
+
+<img src="adminui/image18.png" style="width:6.5in;height:3.04097in" />
+
+##### Topologies
+
+The Admin UI allows an administrator to view, modify, duplicate and delete topologies which are currently deployed to the Knox instance.
+Changes to a topology result in the [re]deployment of that topology, and deleting a topology results in its undeployment.
+
+<img src="adminui/image19.png" style="width:6.5in;height:3.1625in" />
+
+<img src="adminui/image20.png" style="width:6.5in;height:3.38889in" />
+
+###### Read-Only Protections
+
+Topologies which are generated from descriptors are treated as read-only in the Admin UI. This is to avoid the potential confusion resulting from an administrator directly editing
+a generated topology only to have those changes overwritten by a regeneration of that same topology because the source descriptor or provider configuration changed.
+
+
+##### Knox HA Considerations
+
+If the Knox instance which is hosting the Admin UI is configured for [remote configuration monitoring](#Remote+Configuration+Monitor), then provider configuration and descriptor changes will
+be persisted in the configured ZooKeeper ensemble. Then, every Knox instance which is also configured to monitor configuration in this same ZooKeeper will apply
+those changes, and [re]generate/[re]deploy the affected topologies. In this way, Knox HA deployments can be managed by making changes once, and from any of the
+Knox instances.
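+
+For reference, the remote configuration monitor is enabled in `gateway-site.xml` by pointing it at a configured registry client; a minimal sketch follows (the client name `sandbox-zookeeper-client` is an assumption and must match a defined remote configuration registry client):
+
+    <property>
+        <name>gateway.remote.config.monitor.client</name>
+        <value>sandbox-zookeeper-client</value>
+        <description>Remote configuration registry client used to monitor provider configurations and descriptors.</description>
+    </property>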
+
+
+
+<br>
+

Added: knox/trunk/books/1.3.0/book.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/book.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/book.md (added)
+++ knox/trunk/books/1.3.0/book.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,163 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<<../common/header.md>>
+
+<img src="knox-logo.gif" alt="Knox"/>
+<!-- <img src="apache-logo.gif" alt="Apache"/> -->
+<img src="apache-logo.gif" align="right" alt="Apache"/>
+
+# Apache Knox Gateway 1.3.x User's Guide #
+
+## Table Of Contents ##
+
+* #[Introduction]
+* #[Quick Start]
+* #[Gateway Samples]
+* #[Apache Knox Details]
+    * #[Apache Knox Directory Layout]
+    * #[Supported Services]
+* #[Gateway Details]
+    * #[URL Mapping]
+        * #[Default Topology URLs]
+        * #[Fully Qualified URLs]
+        * #[Topology Port Mapping]
+    * #[Configuration]
+        * #[Gateway Server Configuration]
+        * #[Simplified Topology Descriptors]
+        * #[Externalized Provider Configurations]
+        * #[Sharing HA Providers]
+        * #[Simplified Descriptor Files]
+    * #[Cluster Configuration Monitoring]
+        * #[Remote Configuration Monitor]
+        * #[Remote Configuration Registry Clients]
+        * #[Remote Alias Discovery]
+        * #[Topology Descriptors]
+        * #[Hostmap Provider]
+    * #[Knox CLI]
+    * #[Admin API]
+    * #[X-Forwarded-* Headers Support]
+    * #[Metrics]
+* #[Authentication]
+    * #[Advanced LDAP Authentication]
+    * #[LDAP Authentication Caching]
+    * #[LDAP Group Lookup]
+    * #[PAM based Authentication]
+    * #[HadoopAuth Authentication Provider]
+    * #[Preauthenticated SSO Provider]
+    * #[SSO Cookie Provider]
+    * #[JWT Provider]
+    * #[Pac4j Provider - CAS / OAuth / SAML / OpenID Connect]
+    * #[KnoxSSO Setup and Configuration]
+    * #[KnoxToken Configuration]
+    * #[Mutual Authentication with SSL]
+* #[Authorization]
+* #[Identity Assertion]
+    * #[Default Identity Assertion Provider]
+    * #[Concat Identity Assertion Provider]
+    * #[SwitchCase Identity Assertion Provider]
+    * #[Regular Expression Identity Assertion Provider]
+    * #[Hadoop Group Lookup Provider]
+* #[Secure Clusters]
+* #[High Availability]
+* #[Web App Security Provider]
+    * #[CSRF]
+    * #[CORS]
+    * #[X-Frame-Options]
+    * #[X-Content-Type-Options]
+    * #[HTTP Strict-Transport-Security - HSTS]
+* #[Websocket Support]
+* #[Audit]
+* #[Client Details]
+    * #[Client Quickstart]
+    * #[Client Token Sessions]
+        * #[Server Setup]
+    * #[Client DSL and SDK Details]
+* #[Service Details]
+    * #[WebHDFS]
+    * #[WebHCat]
+    * #[Oozie]
+    * #[HBase]
+    * #[Hive]
+    * #[Yarn]
+    * #[Kafka]
+    * #[Storm]
+    * #[Solr]
+    * #[Avatica]
+    * #[Livy Server]
+    * #[Elasticsearch]
+    * #[Common Service Config]
+    * #[Default Service HA support]
+* #[UI Service Details]
+* #[Admin UI]
+* #[Limitations]
+* #[Troubleshooting]
+* #[Export Controls]
+
+
+## Introduction ##
+
+The Apache Knox Gateway is a system that provides a single point of authentication and access for Apache Hadoop services in a cluster.
+The goal is to simplify Hadoop security for both users (i.e. those who access the cluster data and execute jobs) and operators (i.e. those who control access and manage the cluster).
+The gateway runs as a server (or cluster of servers) that provides centralized access to one or more Hadoop clusters.
+In general the goals of the gateway are as follows:
+
+* Provide perimeter security for Hadoop REST APIs to make Hadoop security easier to set up and use
+    * Provide authentication and token verification at the perimeter
+    * Enable authentication integration with enterprise and cloud identity management systems
+    * Provide service level authorization at the perimeter
+* Expose a single URL hierarchy that aggregates REST APIs of a Hadoop cluster
+    * Limit the network endpoints (and therefore firewall holes) required to access a Hadoop cluster
+    * Hide the internal Hadoop cluster topology from potential attackers
+
+<<quick_start.md>>
+<<book_getting-started.md>>
+<<book_knox-samples.md>>
+<<book_gateway-details.md>>
+<<book_client-details.md>>
+<<book_service-details.md>>
+<<book_ui_service_details.md>>
+<<admin_ui.md>>
+<<book_limitations.md>>
+<<book_troubleshooting.md>>
+
+
+## Export Controls ##
+
+Apache Knox Gateway includes cryptographic software.
+The country in which you currently reside may have restrictions on the import, possession, use, and/or
+re-export to another country, of encryption software.
+BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the
+import, possession, or use, and re-export of encryption software, to see if this is permitted.
+See http://www.wassenaar.org for more information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS),
+has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1,
+which includes information security software using or performing cryptographic functions with asymmetric algorithms.
+The form and manner of this Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception
+(see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.
+
+The following provides more details on the included cryptographic software:
+
+* Apache Knox Gateway uses the ApacheDS which in turn uses Bouncy Castle generic encryption libraries.
+* See http://www.bouncycastle.org for more details on Bouncy Castle.
+* See http://directory.apache.org/apacheds for more details on ApacheDS.
+
+
+<<../common/footer.md>>
+

Added: knox/trunk/books/1.3.0/book_client-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/book_client-details.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/book_client-details.md (added)
+++ knox/trunk/books/1.3.0/book_client-details.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,692 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Client Details ##
+The KnoxShell release artifact provides a small-footprint client environment that removes all unnecessary server dependencies, configuration, binary scripts, etc. It comprises a few different components that empower different sorts of users.
+
+* A set of SDK-type classes for providing access to Hadoop resources over HTTP
+* A Groovy-based DSL for scripting access to Hadoop resources, based on the underlying SDK classes
+* KnoxShell token-based sessions that provide a CLI SSO session for executing multiple scripts
+
+The following sections provide an overview and quickstart for the KnoxShell.
+
+### Client Quickstart ###
+The following installation and setup instructions should get you started with using the KnoxShell very quickly.
+
+1. Download a knoxshell-x.x.x.zip or tar file and unzip it in your preferred location `{GATEWAY_CLIENT_HOME}`
+
+        home:knoxshell-0.12.0 larry$ ls -l
+        total 296
+        -rw-r--r--@  1 larry  staff  71714 Mar 14 14:06 LICENSE
+        -rw-r--r--@  1 larry  staff    164 Mar 14 14:06 NOTICE
+        -rw-r--r--@  1 larry  staff  71714 Mar 15 20:04 README
+        drwxr-xr-x@ 12 larry  staff    408 Mar 15 21:24 bin
+        drwxr--r--@  3 larry  staff    102 Mar 14 14:06 conf
+        drwxr-xr-x+  3 larry  staff    102 Mar 15 12:41 logs
+        drwxr-xr-x@ 18 larry  staff    612 Mar 14 14:18 samples
+        
+    |Directory    | Description |
+    |-------------|-------------|
+    |bin          |contains the main knoxshell jar and related shell scripts|
+    |conf         |only contains log4j config|
+    |logs         |contains the knoxshell.log file|
+    |samples      |has numerous examples to help you get started|
+
+2. cd `{GATEWAY_CLIENT_HOME}`
+3. Get/set up the truststore for the target Knox instance or fronting load balancer
+    - if you have access to the server, you may use the command `knoxcli.sh export-cert --type JKS`
+    - copy the resulting `gateway-client-identity.jks` to your user home directory
+4. Execute an example script from the `{GATEWAY_CLIENT_HOME}/samples` directory - for instance:
+    - `bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy`
+    
+            home:knoxshell-0.12.0 larry$ bin/knoxshell.sh samples/ExampleWebHdfsLs.groovy
+            Enter username: guest
+            Enter password:
+            [app-logs, apps, mapred, mr-history, tmp, user]
+
+At this point, you should have seen something similar to the above output - probably with different directories listed. Take a look at the sample that we just ran:
+
+    import groovy.json.JsonSlurper
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+
+    import org.apache.knox.gateway.shell.Credentials
+
+    gateway = "https://localhost:8443/gateway/sandbox"
+
+    credentials = new Credentials()
+    credentials.add("ClearInput", "Enter username: ", "user")
+                    .add("HiddenInput", "Enter pas" + "sword: ", "pass")
+    credentials.collect()
+
+    username = credentials.get("user").string()
+    pass = credentials.get("pass").string()
+
+    session = Hadoop.login( gateway, username, pass )
+
+    text = Hdfs.ls( session ).dir( "/" ).now().string
+    json = (new JsonSlurper()).parseText( text )
+    println json.FileStatuses.FileStatus.pathSuffix
+    session.shutdown()
+
+Some things to note about this sample:
+
+1. The gateway URL is hardcoded
+    - Alternatives would be passing it as an argument to the script, using an environment variable, or prompting for it with a ClearInput credential collector (see the sketch after this list)
+2. Credential collectors are used to gather credentials or other input from various sources. In this sample, the HiddenInput and ClearInput collectors prompt the user for input with the provided prompt text; the values are acquired by a subsequent `get` call with the provided name.
+3. The Hadoop.login method establishes a login session of sorts which will need to be provided to the various API classes as an argument.
+4. The response text is easily retrieved as a string and can be parsed by the JsonSlurper or whatever you like
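+
+As a concrete illustration of the environment variable alternative from note 1, the hardcoded assignment in the sample could be replaced with something like the following sketch (the variable name `KNOXSHELL_TOPOLOGY_URL` follows the token sample later in this section; any name would do):
+
+    // Prefer a gateway URL from the environment, falling back to the sandbox default
+    gateway = System.getenv("KNOXSHELL_TOPOLOGY_URL")
+    if (gateway == null || gateway.equals("")) {
+      gateway = "https://localhost:8443/gateway/sandbox"
+    }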
+
+### Client Token Sessions ###
+Building on the Quickstart above, we will drill into some of the token session details here and walk through another sample.
+
+Unlike the quickstart, token sessions require the server to be configured in specific ways to allow the use of token sessions/federation.
+
+#### Server Setup ####
+1. The KnoxToken service should be added to your `sandbox.xml` topology - see the [KnoxToken Configuration](#KnoxToken+Configuration) section
+
+        <service>
+           <role>KNOXTOKEN</role>
+           <param>
+              <name>knox.token.ttl</name>
+              <value>36000000</value>
+           </param>
+           <param>
+              <name>knox.token.audiences</name>
+              <value>tokenbased</value>
+           </param>
+           <param>
+              <name>knox.token.target.url</name>
+              <value>https://localhost:8443/gateway/tokenbased</value>
+           </param>
+        </service>
+
+2. Add a `tokenbased.xml` topology that accepts tokens as federation tokens for access to the exposed resources, using the [JWT Provider](#JWT+Provider)
+
+        <provider>
+           <role>federation</role>
+           <name>JWTProvider</name>
+           <enabled>true</enabled>
+           <param>
+               <name>knox.token.audiences</name>
+               <value>tokenbased</value>
+           </param>
+        </provider>
+
+3. Use the KnoxShell token commands to establish and manage your session
+    - `bin/knoxshell.sh init https://localhost:8443/gateway/sandbox` to acquire a token and cache it in the user home directory
+    - `bin/knoxshell.sh list` to display the details of the cached token, the expiration time and, optionally, the target URL
+    - `bin/knoxshell.sh destroy` to remove the cached session token and terminate the session
+
+4. Execute a script that can take advantage of the token credential collector and target url
+
+        import groovy.json.JsonSlurper
+        import java.util.HashMap
+        import java.util.Map
+        import org.apache.knox.gateway.shell.Credentials
+        import org.apache.knox.gateway.shell.Hadoop
+        import org.apache.knox.gateway.shell.hdfs.Hdfs
+
+        credentials = new Credentials()
+        credentials.add("KnoxToken", "none: ", "token")
+        credentials.collect()
+
+        token = credentials.get("token").string()
+
+        gateway = System.getenv("KNOXSHELL_TOPOLOGY_URL")
+        if (gateway == null || gateway.equals("")) {
+          gateway = credentials.get("token").getTargetUrl()
+        }
+
+        println ""
+        println "*****************************GATEWAY INSTANCE**********************************"
+        println gateway
+        println "*******************************************************************************"
+        println ""
+
+        headers = new HashMap()
+        headers.put("Authorization", "Bearer " + token)
+
+        session = Hadoop.login( gateway, headers )
+
+        if (args.length > 0) {
+          dir = args[0]
+        } else {
+          dir = "/"
+        }
+
+        text = Hdfs.ls( session ).dir( dir ).now().string
+        json = (new JsonSlurper()).parseText( text )
+        statuses = json.get("FileStatuses");
+
+        println statuses
+
+        session.shutdown()
+
+Note the following about the above sample script:
+
+1. Use of the KnoxToken credential collector
+2. Use of the targetUrl from the credential collector
+3. Optional override of the target URL with an environment variable
+4. The passing of the headers map to the session creation in Hadoop.login
+5. The passing of an argument for the ls command for the path to list or default to "/"
+
+Also note that there is no reason to prompt for username and password as long as the token has not been destroyed or expired.
+There is also no hardcoded endpoint for using the token - it is specified in the token cache or overridden by environment variable.
+
+## Client DSL and SDK Details ##
+
+The lack of any formal SDK or client for REST APIs in Hadoop led to thinking about a very simple client that could help people use and evaluate the gateway.
+The list below outlines the general requirements for such a client.
+
+* Promote the evaluation and adoption of the Apache Knox Gateway
+* Simple to deploy and use on data worker desktops for access to remote Hadoop clusters
+* Simple to extend with new commands both by other Hadoop projects and by the end user
+* Support the notion of a SSO session for multiple Hadoop interactions
+* Support the multiple authentication and federation token capabilities of the Apache Knox Gateway
+* Promote the use of REST APIs as the dominant remote client mechanism for Hadoop services
+* Promote the sense of Hadoop as a single unified product
+* Aligned with the Apache Knox Gateway's overall goals for security
+
+The result is a very simple DSL ([Domain Specific Language](http://en.wikipedia.org/wiki/Domain-specific_language)) of sorts that is used via [Groovy](http://groovy.codehaus.org) scripts.
+Here is an example of a command that copies a file from the local file system to HDFS.
+
+_Note: The variables `session`, `localFile` and `remoteFile` are assumed to be defined._
+
+    Hdfs.put(session).file(localFile).to(remoteFile).now()
+
+*This work is in very early development but is already very useful in its current state.*
+*We are very interested in receiving feedback about how to improve this feature and the DSL in particular.*
+
+A note of thanks to [REST-assured](https://code.google.com/p/rest-assured/) which provides a [Fluent interface](http://en.wikipedia.org/wiki/Fluent_interface) style DSL for testing REST services.
+It served as the initial inspiration for the creation of this DSL.
+
+### Assumptions ###
+
+This document assumes a few things about your environment in order to simplify the examples.
+
+* The JVM is executable as simply `java`.
+* The Apache Knox Gateway is installed and functional.
+* The example commands are executed within the context of the `GATEWAY_HOME` current directory.
+The `GATEWAY_HOME` directory is the directory within the Apache Knox Gateway installation that contains the README file and the bin, conf and deployments directories.
+* A few examples require the use of commands from a standard Groovy installation. These examples are optional, but to try them you will need Groovy [installed](http://groovy.codehaus.org/Installing+Groovy).
+
+
+### Basics ###
+
+In order for secure connections to be made to the Knox gateway server over SSL, the user will need to trust
+the certificate presented by the gateway while connecting. The `knoxcli.sh export-cert` command may be used to obtain
+the gateway-identity cert. It can then be imported into cacerts on the client machine or put into a
+keystore that will be discovered in one of the following locations:
+
+* The user's home directory
+* A directory specified in the environment variable `KNOX_CLIENT_TRUSTSTORE_DIR`
+* That same directory, with the keystore filename specified in the environment variable `KNOX_CLIENT_TRUSTSTORE_FILENAME`
+* The default password "changeit" is assumed, or the password may be specified in the environment variable `KNOX_CLIENT_TRUSTSTORE_PASS`
+* Alternatively, the JSSE system property `javax.net.ssl.trustStore` can be used to specify the keystore location (see the example after this list)
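+
+For example, one way to wire this up explicitly (a sketch, assuming the export is run on the Knox host and the resulting keystore is then copied to the client machine) is:
+
+    # On the Knox host: export the gateway-identity certificate into a JKS keystore
+    bin/knoxcli.sh export-cert --type JKS
+
+    # On the client: point the JVM at the copied keystore via the JSSE system property
+    java -Djavax.net.ssl.trustStore=$HOME/gateway-client-identity.jks \
+         -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy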
+
+The DSL requires a shell to interpret the Groovy script.
+The shell can either be used interactively or to execute a script file.
+To simplify use, the distribution contains an embedded version of the Groovy shell.
+
+The shell can be run interactively. Use the command `exit` to exit.
+
+    java -jar bin/shell.jar
+
+When running interactively it may be helpful to reduce some of the output generated by the shell console.
+Use the following command in the interactive shell to reduce that output.
+This only needs to be done once as these preferences are persisted.
+
+    set verbosity QUIET
+    set show-last-result false
+
+Also, when running interactively, use the `exit` command to terminate the shell.
+Using `^C` to exit can sometimes leave the parent shell in a problematic state.
+
+The shell can also be used to execute a script by passing a single filename argument.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+
+
+### Examples ###
+
+Once the shell can be launched, the DSL can be used to interact with the gateway and Hadoop.
+Below is a very simple example of an interactive shell session to upload a file to HDFS.
+
+    java -jar bin/shell.jar
+    knox:000> session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+    knox:000> Hdfs.put( session ).file( "README" ).to( "/tmp/example/README" ).now()
+
+The `knox:000>` in the example above is the prompt from the embedded Groovy console.
+If your output doesn't look like this, you may need to set the verbosity and show-last-result preferences as described in the Basics section above.
+
+If you receive an error `HTTP/1.1 403 Forbidden` it may be because that file already exists.
+Try deleting it with the following command and then try again.
+
+    knox:000> Hdfs.rm(session).file("/tmp/example/README").now()
+
+Without using some other tool to browse HDFS it is hard to tell that this command did anything.
+Execute this to get a bit more feedback.
+
+    knox:000> println "Status=" + Hdfs.put( session ).file( "README" ).to( "/tmp/example/README2" ).now().statusCode
+    Status=201
+
+Notice that a different filename is used for the destination.
+Without this an error would have resulted.
+Of course the DSL also provides a command to list the contents of a directory.
+
+    knox:000> println Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    {"FileStatuses":{"FileStatus":[{"accessTime":1363711366977,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711366977,"owner":"guest","pathSuffix":"README","permission":"644","replication":1,"type":"FILE"},{"accessTime":1363711375617,"blockSize":134217728,"group":"hdfs","length":19395,"modificationTime":1363711375617,"owner":"guest","pathSuffix":"README2","permission":"644","replication":1,"type":"FILE"}]}}
+
+It is a design decision of the DSL to not provide type safe classes for various request and response payloads.
+Doing so would provide an undesirable coupling between the DSL and the service implementation.
+It also would make adding new commands much more difficult.
+See the Groovy section below for a variety of capabilities and tools for working with JSON and XML to make this easy.
+The example below shows the use of JsonSlurper and GPath to extract content from a JSON response.
+
+    knox:000> import groovy.json.JsonSlurper
+    knox:000> text = Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    knox:000> json = (new JsonSlurper()).parseText( text )
+    knox:000> println json.FileStatuses.FileStatus.pathSuffix
+    [README, README2]
+
+*In the future, "built-in" methods to slurp JSON and XML may be added to make this a bit easier.*
+*This would allow for the following type of single line interaction:*
+
+    println Hdfs.ls(session).dir("/tmp").now().json().FileStatuses.FileStatus.pathSuffix
+
+Shell sessions should always be ended by shutting down the session.
+The examples above do not touch on it, but the DSL supports the simple execution of commands asynchronously.
+The shutdown command attempts to ensure that all asynchronous commands have completed before exiting the shell.
+
+    knox:000> session.shutdown()
+    knox:000> exit
+
+All of the commands above could have been combined into a script file and executed as a single line.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+
+This would be the content of that script.
+
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+    import groovy.json.JsonSlurper
+    
+    gateway = "https://localhost:8443/gateway/sandbox"
+    username = "guest"
+    password = "guest-password"
+    dataFile = "README"
+    
+    session = Hadoop.login( gateway, username, password )
+    Hdfs.rm( session ).file( "/tmp/example" ).recursive().now()
+    Hdfs.put( session ).file( dataFile ).to( "/tmp/example/README" ).now()
+    text = Hdfs.ls( session ).dir( "/tmp/example" ).now().string
+    json = (new JsonSlurper()).parseText( text )
+    println json.FileStatuses.FileStatus.pathSuffix
+    session.shutdown()
+    exit
+
+Notice the `Hdfs.rm` command.  This is included simply to ensure that the script can be rerun.
+Without this an error would result the second time it is run.
+
+### Futures ###
+
+The DSL supports the ability to invoke commands asynchronously via the later() invocation method.
+The object returned from the `later()` method is a `java.util.concurrent.Future` parameterized with the response type of the command.
+This is an example of how to asynchronously put a file to HDFS.
+
+    future = Hdfs.put(session).file("README").to("/tmp/example/README").later()
+    println future.get().statusCode
+
+The `future.get()` method will block until the asynchronous command is complete.
+To illustrate the usefulness of this, however, multiple concurrent commands are required.
+
+    readmeFuture = Hdfs.put(session).file("README").to("/tmp/example/README").later()
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("/tmp/example/LICENSE").later()
+    session.waitFor( readmeFuture, licenseFuture )
+    println readmeFuture.get().statusCode
+    println licenseFuture.get().statusCode
+
+The `session.waitFor()` method will wait for one or more asynchronous commands to complete.
+
+
+### Closures ###
+
+Futures alone only provide asynchronous invocation of the command.
+What if some processing should also occur asynchronously once the command is complete?
+Support for this is provided by closures.
+Closures are blocks of code that are passed into the `later()` invocation method.
+In Groovy these are contained within `{}` immediately after a method.
+These blocks of code are executed once the asynchronous command is complete.
+
+    Hdfs.put(session).file("README").to("/tmp/example/README").later(){ println it.statusCode }
+
+In this example the `put()` command is executed on a separate thread and once complete the `println it.statusCode` block is executed on that thread.
+The `it` variable is automatically populated by Groovy and is a reference to the result that is returned from the future or `now()` method.
+The future example above can be rewritten to illustrate the use of closures.
+
+    readmeFuture = Hdfs.put(session).file("README").to("/tmp/example/README").later() { println it.statusCode }
+    licenseFuture = Hdfs.put(session).file("LICENSE").to("/tmp/example/LICENSE").later() { println it.statusCode }
+    session.waitFor( readmeFuture, licenseFuture )
+
+Again, the `session.waitFor()` method will wait for one or more asynchronous commands to complete.
+
+
+### Constructs ###
+
+There are three primary constructs that need to be understood in order to use the DSL effectively.
+
+
+#### Session ####
+
+This construct encapsulates the client side session state that will be shared between all command invocations.
+In particular it will simplify the management of any tokens that need to be presented with each command invocation.
+It also manages a thread pool that is used by all asynchronous commands, which is why it is important to call one of the shutdown methods.
+
+The syntax associated with this is expected to change. We expect that credentials will not need to be provided to the gateway. Rather, it is expected that some form of access token will be used to initialize the session.
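+
+As a minimal sketch of the lifecycle this implies (the `shutdown(long, TimeUnit)` overload is an assumption, suggested by the shell's default import of `java.util.concurrent.TimeUnit`):
+
+    import org.apache.knox.gateway.shell.Hadoop
+    import java.util.concurrent.TimeUnit
+
+    // The session holds shared state (credentials, thread pool) for all commands.
+    session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+
+    // ... invoke commands here ...
+
+    // Release the asynchronous thread pool; a bounded wait for in-flight
+    // commands is assumed here, plain session.shutdown() also works.
+    session.shutdown( 10, TimeUnit.SECONDS )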
+
+
+#### Services ####
+
+Services are the primary extension point for adding new suites of commands.
+The current built-in examples are: Hdfs, Job and Workflow.
+The desire for extensibility is the reason for the slightly awkward `Hdfs.ls(session)` syntax.
+Certainly something more like `session.hdfs().ls()` would have been preferred but this would prevent adding new commands easily.
+At a minimum it would result in extension commands with a different syntax from the "built-in" commands.
+
+The service objects essentially function as a factory for a suite of commands.
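+
+Decomposed, the factory pattern looks like this; the fluent one-liners elsewhere in this guide are just these steps chained together:
+
+    // Each static method on a service creates one command's request object.
+    request = Hdfs.ls( session )   // factory method, bound to the session
+    request.dir( "/tmp" )          // populate the request
+    response = request.now()       // invoke it synchronously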
+
+
+#### Commands ####
+
+Commands provide the behavior of the DSL.
+They typically follow a Fluent interface style in order to allow for single line commands.
+There are really three parts to each command: Request, Invocation and Response.
+
+
+#### Request ####
+
+The request is populated by all of the methods between the "verb" method and the "invoke" method.
+For example, in `Hdfs.ls( session ).dir( "/tmp" ).now()` the request is populated between the "verb" method `ls()` and the "invoke" method `now()`.
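+
+Concretely, in the file upload command from earlier, `file( "README" )` and `to( "/tmp/example/README2" )` are the request-population methods:
+
+    Hdfs.put( session ).file( "README" ).to( "/tmp/example/README2" ).now()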
+
+
+#### Invocation ####
+
+The invocation method controls how the request is invoked.
+Currently synchronous and asynchronous invocation are supported.
+The `now()` method executes the request and returns the result immediately.
+The `later()` method submits the request to be executed later and returns a future from which the result can be retrieved.
+In addition, the `later()` invocation method can optionally be provided a closure to execute when the request is complete.
+See the Futures and Closures sections above for additional detail and examples.
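+
+The two invocation styles side by side, using the `ls` command from earlier:
+
+    // Synchronous: block and return the response.
+    println Hdfs.ls( session ).dir( "/tmp/example" ).now().statusCode
+
+    // Asynchronous: return a java.util.concurrent.Future immediately,
+    // optionally running a closure when the response arrives.
+    future = Hdfs.ls( session ).dir( "/tmp/example" ).later() { println it.statusCode }
+    future.get()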
+
+
+#### Response ####
+
+The response contains the results of the invocation of the request.
+In most cases the response is a thin wrapper over the HTTP response.
+In fact many commands will share a single `BasicResponse` type that only provides a few simple methods.
+
+    public int getStatusCode()
+    public long getContentLength()
+    public String getContentType()
+    public String getContentEncoding()
+    public InputStream getStream()
+    public String getString()
+    public byte[] getBytes()
+    public void close()
+
+Thanks to Groovy, these methods can be accessed as attributes.
+In some of the examples above, for instance, the statusCode attribute was retrieved this way.
+
+    println Hdfs.put( session ).file( "README" ).to( "/tmp/example/README" ).now().statusCode
+
+Groovy will invoke the `getStatusCode()` method to retrieve the `statusCode` attribute.
+
+The three methods `getStream()`, `getBytes()` and `getString()` deserve special attention.
+Care must be taken that the HTTP body is fully read once and only once.
+Therefore exactly one of these methods must be called, exactly once.
+Calling one of them more than once will cause an error.
+Failing to call one of them at all will result in lingering open HTTP connections.
+The `close()` method may be used if the caller is not interested in reading the result body.
+Most commands that do not expect a response body will call `close()` implicitly.
+If the body is retrieved via `getBytes()` or `getString()`, the `close()` method need not be called.
+When using `getStream()`, care must be taken to consume the entire body, otherwise lingering open HTTP connections will result.
+The `close()` method may be called after reading the body partially to discard the remainder of the body.
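+
+A short sketch of both patterns, assuming the `Hdfs.get` command described in the #[Service Details] section:
+
+    // getString() fully consumes the body, so no explicit close() is needed.
+    println Hdfs.get( session ).from( "/tmp/example/README" ).now().string
+
+    // With getStream() the caller owns the body; close() discards any unread remainder.
+    response = Hdfs.get( session ).from( "/tmp/example/README" ).now()
+    firstByte = response.stream.read()
+    response.close()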
+
+
+### Services ###
+
+The built-in supported client DSL for each Hadoop service can be found in the #[Service Details] section.
+
+
+### Extension ###
+
+Extensibility is a key design goal of the KnoxShell and client DSL.
+There are two ways to provide extended functionality for use with the shell.
+The first is to simply create Groovy scripts that use the DSL to perform a useful task.
+The second is to add new services and commands.
+In order to add new services and commands, new classes must be written in either Groovy or Java and added to the classpath of the shell.
+Fortunately there is a very simple way to add classes and JARs to the shell classpath.
+The first time the shell is executed it will create a configuration file in the same directory as the JAR with the same base name and a `.cfg` extension.
+
+    bin/shell.jar
+    bin/shell.cfg
+
+That file contains both the main class for the shell as well as a definition of the classpath.
+By default, that file will contain the following.
+
+    main.class=org.apache.knox.gateway.shell.Shell
+    class.path=../lib; ../lib/*.jar; ../ext; ../ext/*.jar
+
+Therefore, to extend the shell, copy any new service and command classes to the `ext` directory, or, if they are packaged within a JAR, copy the JAR to the `ext` directory.
+The `lib` directory is reserved for JARs that may be delivered with the product.
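+
+If you prefer to keep extensions elsewhere, the classpath definition can also be edited directly; for example (the `../myext` directory here is hypothetical):
+
+    main.class=org.apache.knox.gateway.shell.Shell
+    class.path=../lib; ../lib/*.jar; ../ext; ../ext/*.jar; ../myext/*.jar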
+
+Below are samples for the service and command classes that would need to be written to add new commands to the shell.
+These happen to be Groovy source files but could, with very minor changes, be Java files.
+The easiest way to add these to the shell is to compile them directly into the `ext` directory.
+*Note: This command depends upon having the Groovy compiler installed and available on the execution path.*
+
+    groovy -d ext -cp bin/shell.jar samples/SampleService.groovy \
+        samples/SampleSimpleCommand.groovy samples/SampleComplexCommand.groovy
+
+These source files are available in the samples directory of the distribution but are included here for convenience.
+
+
+#### Sample Service (Groovy)
+
+    import org.apache.knox.gateway.shell.Hadoop
+
+    class SampleService {
+
+        static String PATH = "/webhdfs/v1"
+
+        static SimpleCommand simple( Hadoop session ) {
+            return new SimpleCommand( session )
+        }
+
+        static ComplexCommand.Request complex( Hadoop session ) {
+            return new ComplexCommand.Request( session )
+        }
+
+    }
+
+#### Sample Simple Command (Groovy)
+
+    import org.apache.knox.gateway.shell.AbstractRequest
+    import org.apache.knox.gateway.shell.BasicResponse
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.http.client.methods.HttpGet
+    import org.apache.http.client.utils.URIBuilder
+
+    import java.util.concurrent.Callable
+
+    class SimpleCommand extends AbstractRequest<BasicResponse> {
+
+        SimpleCommand( Hadoop session ) {
+            super( session )
+        }
+
+        private String param
+        SimpleCommand param( String param ) {
+            this.param = param
+            return this
+        }
+
+        @Override
+        protected Callable<BasicResponse> callable() {
+            return new Callable<BasicResponse>() {
+                @Override
+                BasicResponse call() {
+                    URIBuilder uri = uri( SampleService.PATH, param )
+                    addQueryParam( uri, "op", "LISTSTATUS" )
+                    HttpGet get = new HttpGet( uri.build() )
+                    return new BasicResponse( execute( get ) )
+                }
+            }
+        }
+
+    }
+
+
+#### Sample Complex Command (Groovy)
+
+    import com.jayway.jsonpath.JsonPath
+    import org.apache.knox.gateway.shell.AbstractRequest
+    import org.apache.knox.gateway.shell.BasicResponse
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.http.HttpResponse
+    import org.apache.http.client.methods.HttpGet
+    import org.apache.http.client.utils.URIBuilder
+
+    import java.util.concurrent.Callable
+
+    class ComplexCommand {
+
+        static class Request extends AbstractRequest<Response> {
+
+            Request( Hadoop session ) {
+                super( session )
+            }
+
+            private String param;
+            Request param( String param ) {
+                this.param = param;
+                return this;
+            }
+
+            @Override
+            protected Callable<Response> callable() {
+                return new Callable<Response>() {
+                    @Override
+                    Response call() {
+                        URIBuilder uri = uri( SampleService.PATH, param )
+                        addQueryParam( uri, "op", "LISTSTATUS" )
+                        HttpGet get = new HttpGet( uri.build() )
+                        return new Response( execute( get ) )
+                    }
+                }
+            }
+
+        }
+
+        static class Response extends BasicResponse {
+
+            Response(HttpResponse response) {
+                super(response)
+            }
+
+            public List<String> getNames() {
+                return JsonPath.read( string, "\$.FileStatuses.FileStatus[*].pathSuffix" )
+            }
+
+        }
+
+    }
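+
+Once compiled, the new commands can be used from the shell in the same fluent style as the built-in services. A sketch, assuming the classes above have been compiled into `ext`:
+
+    knox:000> session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
+    knox:000> println SampleService.simple( session ).param( "/tmp" ).now().string
+    knox:000> println SampleService.complex( session ).param( "/tmp" ).now().names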
+
+
+### Groovy
+
+The shell included in the distribution is basically an unmodified packaging of the Groovy shell.
+The distribution does, however, provide a wrapper that makes it very easy to set up the classpath for the shell.
+In fact the JARs required to execute the DSL are included on the classpath by default.
+Therefore these commands are functionally equivalent if you have Groovy installed.
+See below for a description of the JARs required by the DSL from the `lib` and `dep` directories.
+
+    java -jar bin/shell.jar samples/ExampleWebHdfsPutGet.groovy
+    groovy -classpath {JARs required by the DSL from lib and dep} samples/ExampleWebHdfsPutGet.groovy
+
+The interactive shell isn't exactly equivalent.
+However, the only difference is that `shell.jar` automatically executes some additional imports that are useful for the KnoxShell client DSL.
+So these two sets of commands should be functionally equivalent.
+*However, there is currently a class loading issue that prevents the `groovysh` command from working properly.*
+
+    java -jar bin/shell.jar
+
+    groovysh -classpath {JARs required by the DSL from lib and dep}
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+    import org.apache.knox.gateway.shell.job.Job
+    import org.apache.knox.gateway.shell.workflow.Workflow
+    import java.util.concurrent.TimeUnit
+
+Alternatively, you can use the Groovy Console, which does not appear to have the same class loading issue.
+
+    groovyConsole -classpath {JARs required by the DSL from lib and dep}
+
+    import org.apache.knox.gateway.shell.Hadoop
+    import org.apache.knox.gateway.shell.hdfs.Hdfs
+    import org.apache.knox.gateway.shell.job.Job
+    import org.apache.knox.gateway.shell.workflow.Workflow
+    import java.util.concurrent.TimeUnit
+
+The JARs currently required by the client DSL are:
+
+    lib/gateway-shell-{GATEWAY_VERSION}.jar
+    dep/httpclient-4.3.6.jar
+    dep/httpcore-4.3.3.jar
+    dep/commons-lang3-3.4.jar
+    dep/commons-codec-1.7.jar
+
+So on Linux/macOS you would need this command:
+
+    groovy -cp lib/gateway-shell-0.10.0.jar:dep/httpclient-4.3.6.jar:dep/httpcore-4.3.3.jar:dep/commons-lang3-3.4.jar:dep/commons-codec-1.7.jar samples/ExampleWebHdfsPutGet.groovy
+
+and on Windows you would need this command:
+
+    groovy -cp lib/gateway-shell-0.10.0.jar;dep/httpclient-4.3.6.jar;dep/httpcore-4.3.3.jar;dep/commons-lang3-3.4.jar;dep/commons-codec-1.7.jar samples/ExampleWebHdfsPutGet.groovy
+
+The exact list of required JARs is likely to change from release to release so it is recommended that you utilize the wrapper `bin/shell.jar`.
+
+In addition, because the DSL can be used via standard Groovy, the Groovy integrations in many popular IDEs (e.g. IntelliJ, Eclipse) can also be used.
+This makes it particularly nice to develop and execute scripts to interact with Hadoop.
+The code-completion features in modern IDEs in particular provide immense value.
+All that is required is to add the `gateway-shell-{GATEWAY_VERSION}.jar` to the project's classpath.
+
+There are a variety of Groovy tools that make it very easy to work with the standard interchange formats (i.e. JSON and XML).
+In Groovy, the creation of XML or JSON is typically done via a "builder" and parsing via a "slurper".
+In addition, once JSON or XML is "slurped", GPath, an XPath-like feature built into Groovy, can be used to access data.
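+
+As a small sketch of the XML side (the JSON equivalent with JsonSlurper was shown earlier; the sample document here is fabricated for illustration):
+
+    import groovy.util.XmlSlurper
+
+    text = "<FileStatuses><FileStatus><pathSuffix>README</pathSuffix></FileStatus></FileStatuses>"
+    xml = new XmlSlurper().parseText( text )
+    // GPath navigation works the same way on slurped XML as on slurped JSON.
+    println xml.FileStatus.pathSuffix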
+
+* XML
+    * Markup Builder [Overview](http://groovy.codehaus.org/Creating+XML+using+Groovy's+MarkupBuilder), [API](http://groovy.codehaus.org/api/groovy/xml/MarkupBuilder.html)
+    * XML Slurper [Overview](http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper), [API](http://groovy.codehaus.org/api/groovy/util/XmlSlurper.html)
+    * XPath [Overview](http://groovy.codehaus.org/GPath), [API](http://docs.oracle.com/javase/1.5.0/docs/api/javax/xml/xpath/XPath.html)
+* JSON
+    * JSON Builder [API](http://groovy.codehaus.org/gapi/groovy/json/JsonBuilder.html)
+    * JSON Slurper [API](http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html)
+    * JSON Path [API](https://code.google.com/p/json-path/)
+    * GPath [Overview](http://groovy.codehaus.org/GPath)
+

Added: knox/trunk/books/1.3.0/book_gateway-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/book_gateway-details.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/book_gateway-details.md (added)
+++ knox/trunk/books/1.3.0/book_gateway-details.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,106 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+## Gateway Details ##
+
+This section describes the details of the Knox Gateway itself, including:
+
+* How URLs are mapped between a gateway that services multiple Hadoop clusters and the clusters themselves
+* How the gateway is configured through `gateway-site.xml` and cluster specific topology files
+* How to configure the various policy enforcement provider features such as authentication, authorization, auditing, hostmapping, etc.
+
+### URL Mapping ###
+
+The gateway functions much like a reverse proxy.
+As such, it maintains a mapping of URLs that are exposed externally by the gateway to URLs that are provided by the Hadoop cluster.
+
+#### Default Topology URLs #####
+In order to provide compatibility with the Hadoop Java client and existing CLI tools, the Knox Gateway has provided a feature called the _Default Topology_. This refers to a topology deployment that will be able to route URLs without the additional context that the gateway uses for differentiating from one Hadoop cluster to another. This allows the URLs to match those used by existing clients that may access WebHDFS through the Hadoop file system abstraction.
+
+When a topology file is deployed with a file name that matches the configured default topology name, a specialized mapping for URLs is installed for that particular topology. This allows the URLs that are expected by the existing Hadoop CLIs for WebHDFS to be used in interacting with the specific Hadoop cluster that is represented by the default topology file.
+
+The configuration for the default topology name is found in `gateway-site.xml` as a property called: `default.app.topology.name`.
+
+The default value for this property is empty.
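+
+For example, to make the `sandbox` topology the default, the property might be set in `gateway-site.xml` like this:
+
+    <property>
+        <name>default.app.topology.name</name>
+        <value>sandbox</value>
+    </property>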
+
+
+When deploying the `sandbox.xml` topology and setting `default.app.topology.name` to `sandbox`, both of the following example URLs work for the same underlying Hadoop cluster:
+
+    https://{gateway-host}:{gateway-port}/webhdfs
+    https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs
+
+These default topology URLs exist for all of the services in the topology.
+
+#### Fully Qualified URLs #####
+Examples of mappings for WebHDFS, WebHCat, Oozie, HBase and Hive JDBC are shown below.
+These mappings are generated from the combination of the gateway configuration file (i.e. `{GATEWAY_HOME}/conf/gateway-site.xml`) and the cluster topology descriptors (e.g. `{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml`).
+The port numbers shown for the Cluster URLs represent the default ports for these services.
+The actual port number may be different for a given cluster.
+
+* WebHDFS
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs`
+    * Cluster: `http://{webhdfs-host}:50070/webhdfs`
+* WebHCat (Templeton)
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/templeton`
+    * Cluster: `http://{webhcat-host}:50111/templeton`
+* Oozie
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/oozie`
+    * Cluster: `http://{oozie-host}:11000/oozie`
+* HBase
+    * Gateway: `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/hbase`
+    * Cluster: `http://{hbase-host}:8080`
+* Hive JDBC
+    * Gateway: `jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password};transportMode=http;httpPath={gateway-path}/{cluster-name}/hive`
+    * Cluster: `http://{hive-host}:10001/cliservice`
+
+The values for `{gateway-host}`, `{gateway-port}`, `{gateway-path}` are provided via the gateway configuration file (i.e. `{GATEWAY_HOME}/conf/gateway-site.xml`).
+
+The value for `{cluster-name}` is derived from the file name of the cluster topology descriptor (e.g. `{GATEWAY_HOME}/deployments/{cluster-name}.xml`).
+
+The values for `{webhdfs-host}`, `{webhcat-host}`, `{oozie-host}`, `{hbase-host}` and `{hive-host}` are provided via the cluster topology descriptor (e.g. `{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml`).
+
+Note: The ports 50070 (9870 for Hadoop 3.x), 50111, 11000, 8080 and 10001 are the defaults for WebHDFS, WebHCat, Oozie, HBase and Hive respectively.
+Their values can also be provided via the cluster topology descriptor if your Hadoop cluster uses different ports.
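+
+For example, a WebHDFS entry in the topology descriptor that overrides the default port might look like this (the host is a placeholder):
+
+    <service>
+        <role>WEBHDFS</role>
+        <url>http://{webhdfs-host}:9870/webhdfs</url>
+    </service>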
+
+Note: The HBase REST API uses port 8080 by default. This often clashes with other running services.
+In the Hortonworks Sandbox, Apache Ambari might be running on this port, so you might have to change it to a different port (e.g. 60080).
+
+<<book_topology_port_mapping.md>>
+<<config.md>>
+<<knox_cli.md>>
+<<admin_api.md>>
+<<x-forwarded-headers.md>>
+<<config_metrics.md>>
+<<config_authn.md>>
+<<config_advanced_ldap.md>>
+<<config_ldap_authc_cache.md>>
+<<config_ldap_group_lookup.md>>
+<<config_pam_authn.md>>
+<<config_id_assertion.md>>
+<<config_authz.md>>
+<<config_kerberos.md>>
+<<config_ha.md>>
+<<config_webappsec_provider.md>>
+<<config_hadoop_auth_provider.md>>
+<<config_preauth_sso_provider.md>>
+<<config_sso_cookie_provider.md>>
+<<config_pac4j_provider.md>>
+<<config_knox_sso.md>>
+<<config_knox_token.md>>
+<<config_mutual_authentication_ssl.md>>
+<<websocket-support.md>>
+<<config_audit.md>>

Added: knox/trunk/books/1.3.0/book_getting-started.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/book_getting-started.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/book_getting-started.md (added)
+++ knox/trunk/books/1.3.0/book_getting-started.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,95 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Apache Knox Details ##
+
+This section provides everything you need to know to get the Knox gateway up and running against a Hadoop cluster.
+
+#### Hadoop ####
+
+An existing Hadoop 2.x or 3.x cluster is required for Knox to sit in front of and protect.
+It is possible to use a Hadoop cluster deployed on EC2 but this will require additional configuration not covered here.
+It is also possible to protect access to the services of a Hadoop cluster that is secured with Kerberos.
+This too requires additional configuration that is described in other sections of this guide.
+See #[Supported Services] for details on what is supported for this release.
+
+The instructions that follow assume a few things:
+
+1. The gateway is *not* collocated with the Hadoop clusters themselves.
+2. The host names and IP addresses of the cluster services are accessible by the gateway wherever it happens to be running.
+
+All of the instructions and samples provided here are tailored and tested to work "out of the box" against a [Hortonworks Sandbox 2.x VM][sandbox].
+
+
+#### Apache Knox Directory Layout ####
+
+Knox can be installed by expanding the zip/archive file.
+
+The table below provides a brief explanation of the important files and directories within `{GATEWAY_HOME}`.
+
+| Directory                | Purpose |
+| ------------------------ | ------- |
+| conf/                    | Contains configuration files that apply to the gateway globally (i.e. not cluster specific). |
+| data/                    | Contains security and topology specific artifacts that require read/write access at runtime |
+| conf/topologies/         | Contains topology files that represent Hadoop clusters which the gateway uses to deploy cluster proxies |
+| data/security/           | Contains the persisted master secret and keystore dir |
+| data/security/keystores/ | Contains the gateway identity keystore and credential stores for the gateway and each deployed cluster topology |
+| data/services            | Contains service behavior definitions for the services currently supported. |
+| bin/                     | Contains the executable shell scripts, batch files and JARs for clients and servers. |
+| data/deployments/        | Contains deployed cluster topologies used to protect access to specific Hadoop clusters. |
+| lib/                     | Contains the JARs for all the components that make up the gateway. |
+| dep/                     | Contains the JARs for all of the components upon which the gateway depends. |
+| ext/                     | A directory where user-supplied extension JARs can be placed to extend the gateway's functionality. |
+| pids/                    | Contains the process ids for running LDAP and gateway servers |
+| samples/                 | Contains a number of samples that can be used to explore the functionality of the gateway. |
+| templates/               | Contains default configuration files that can be copied and customized. |
+| README                   | Provides basic information about the Apache Knox Gateway. |
+| ISSUES                   | Describes significant known issues. |
+| CHANGES                  | Enumerates the changes between releases. |
+| LICENSE                  | Documents the license under which this software is provided. |
+| NOTICE                   | Documents required attribution notices for included dependencies. |
+
+
+### Supported Services ###
+
+This table enumerates the versions of various Hadoop services that have been tested to work with the Knox Gateway.
+
+| Service                | Version     | Non-Secure  | Secure | HA |
+| -----------------------|-------------|-------------|--------|----|
+| WebHDFS                | 2.4.0       | ![y]        | ![y]   |![y]|
+| WebHCat/Templeton      | 0.13.0      | ![y]        | ![y]   |![y]|
+| Oozie                  | 4.0.0       | ![y]        | ![y]   |![y]|
+| HBase                  | 0.98.0      | ![y]        | ![y]   |![y]|
+| Hive (via WebHCat)     | 0.13.0      | ![y]        | ![y]   |![y]|
+| Hive (via JDBC/ODBC)   | 0.13.0      | ![y]        | ![y]   |![y]|
+| Yarn ResourceManager   | 2.5.0       | ![y]        | ![y]   |![n]|
+| Kafka (via REST Proxy) | 0.10.0      | ![y]        | ![y]   |![y]|
+| Storm                  | 0.9.3       | ![y]        | ![n]   |![n]|
+| Solr                   | 5.5+ and 6+ | ![y]        | ![y]   |![y]|
+
+
+### More Examples ###
+
+These examples provide more detail about how to access various Apache Hadoop services via the Apache Knox Gateway.
+
+* #[WebHDFS Examples]
+* #[WebHCat Examples]
+* #[Oozie Examples]
+* #[HBase Examples]
+* #[Hive Examples]
+* #[Yarn Examples]
+* #[Storm Examples]

Added: knox/trunk/books/1.3.0/book_knox-samples.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/book_knox-samples.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/book_knox-samples.md (added)
+++ knox/trunk/books/1.3.0/book_knox-samples.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,69 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Gateway Samples ###
+
+The purpose of the samples within the `{GATEWAY_HOME}/samples` directory is to demonstrate the capabilities of the Apache Knox Gateway to provide access to the numerous APIs that are available from the service components of a Hadoop cluster.
+
+Depending on exactly how your Knox installation was done, some number of steps will be required to fully install and configure the samples for use.
+
+This section will help describe the assumptions of the samples and the steps to get them to work in a couple of different deployment scenarios.
+
+#### Assumptions of the Samples ####
+
+The samples were initially written with the intent of working out of the box for the various Hadoop demo environments that are deployed as a single node cluster inside of a VM. The following assumptions were made from that context and should be understood in order to get the samples to work in other deployment scenarios:
+
+* There is a valid Java JDK on the PATH for executing the samples.
+* The Knox Demo LDAP server is running on localhost and port 33389, which is the default port for the ApacheDS LDAP server.
+* The LDAP directory in use has a set of demo users provisioned whose passwords follow the convention of the username suffixed with "-password". Most of the samples use some variation of this pattern with "guest" and "guest-password".
+* The Knox Gateway instance is running on the same machine from which you will be running the samples, therefore "localhost", and the default port of "8443" is being used.
+* Finally, there is a properly provisioned `sandbox.xml` topology in the `{GATEWAY_HOME}/conf/topologies` directory that is configured to point to the actual hosts and ports of the running service components.
+
+#### Steps for Demo Single Node Clusters ####
+
+There should be little, if anything, to do in a demo environment that has been provisioned for illustrating the use of Apache Knox.
+
+However, the following items will be worth ensuring before you start:
+
+1. The `sandbox.xml` topology is configured properly for the deployed services
+2. An LDAP server is running with the guest/guest-password user available in the directory
+
+#### Steps for Ambari deployed Knox Gateway ####
+
+Apache Knox instances that are under the management of Ambari are generally assumed not to be demo instances. These instances are in place to facilitate development, testing or production Hadoop clusters.
+
+The Knox samples can, however, be made to work with Ambari managed Knox instances with a few steps:
+
+1. You need to have SSH access to the environment in order for the localhost assumption within the samples to be valid
+2. The Knox Demo LDAP Server is started; you can start it from Ambari
+3. The `default.xml` topology file can be copied to `sandbox.xml` in order to satisfy the topology name assumption in the samples (see the copy command after this list)
+4. Be sure to use an actual Java JRE to run the sample with something like:
+
+    /usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy
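+
+To satisfy step 3, the topology file can simply be copied (paths assume a default layout):
+
+    cp {GATEWAY_HOME}/conf/topologies/default.xml {GATEWAY_HOME}/conf/topologies/sandbox.xml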
+
+#### Steps for a manually installed Knox Gateway ####
+
+For manually installed Knox instances, there is really no way for the installer to know how to configure the topology file for you.
+
+Essentially, these steps are identical to those for an Ambari deployed instance, except that step 3 is replaced with editing the out-of-the-box `sandbox.xml` to point at the proper hosts and ports.
+
+1. You need to have SSH access to the environment in order for the localhost assumption within the samples to be valid.
+2. The Knox Demo LDAP Server is started; for a manually installed gateway it can be started with `bin/ldap.sh start`
+3. Change the hosts and ports within the `{GATEWAY_HOME}/conf/topologies/sandbox.xml` to reflect your actual cluster service locations.
+4. Be sure to use an actual Java JRE to run the sample with something like:
+
+    /usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar samples/ExampleWebHdfsLs.groovy


