knox-commits mailing list archives

From lmc...@apache.org
Subject svn commit: r1586861 - in /knox: site/index.html trunk/src/site/markdown/index.md
Date Sat, 12 Apr 2014 15:09:54 GMT
Author: lmccay
Date: Sat Apr 12 15:09:53 2014
New Revision: 1586861

URL: http://svn.apache.org/r1586861
Log:
added overview sections to site

Modified:
    knox/site/index.html
    knox/trunk/src/site/markdown/index.md

Modified: knox/site/index.html
URL: http://svn.apache.org/viewvc/knox/site/index.html?rev=1586861&r1=1586860&r2=1586861&view=diff
==============================================================================
--- knox/site/index.html (original)
+++ knox/site/index.html Sat Apr 12 15:09:53 2014
@@ -189,7 +189,7 @@ limitations under the License. --><div c
   <li>Integrates well with enterprise identity management solutions</li>
   <li>Protects the details of the Hadoop cluster deployment (hosts and ports are hidden
from endusers)</li>
   <li>Simplifies the number of services that clients need to interact with</li>
-</ul><p><img src="http://knox.apache.org/images/knox-overview.gif" alt="alt
text" /></p></div><div class="section"><h2>Overview<a name="Overview"></a></h2><p>The
Knox API Gateway is designed as a reverse proxy with consideration for pluggability in the
areas of<br /> policy enforcement, through providers and the backend services for which
it proxies requests.</p><p>Policy enforcement ranges from authentication/federation,
authorization, audit, dispatch, hostmapping<br /> and content rewrite rules. Policy
is enforced through a chain of providers that are defined within the topology<br />
deployment descriptor for each Hadoop cluster gated by Knox. The cluster definition is also
defined<br /> within the topology deployment descriptor and provides the Knox Gateway
with the layout of the Hadoop<br /> cluster for purposes of routing and translation
between user facing URLs and Hadoop cluster internals.</p><p>Each Hadoop cluster
that is protected by Knox has its set of REST APIs represented by a single cluster specific<br /> application context path. This allows the Knox
Gateway to both protect multiple Hadoop clusters and present<br /> the REST API consumer
with a single endpoint for access to all of the Hadoop services required, across the<br
/> multiple clusters.</p></div><div class="section"><h2>Authentication<a
name="Authentication"></a></h2><p>Providers with the role of authentication
are responsible for collecting credentials presented by the API<br /> consumer, validating
them and communicating the successful or failed authentication to the client or the<br
/> rest of the provider chain.</p><p>Out of the box, the Knox Gateway provides
the Shiro authentication provider. This is a provider that leverages<br /> the Apache
Shiro project for authenticating BASIC credentials against an LDAP user store. There is support
for<br /> OpenLDAP, ApacheDS and Microsoft Active Directory.</p></div><div
class="section"><h2>Federation/SSO<a name="FederationSSO"></a></h2><p>For customers that require credentials to be presented to a limited set of trusted entities
within the enterprise,<br /> the Knox Gateway may be configured to federate the authenticated
identity from an external authentication event.<br /> This is done through providers
with the role of federation. The out of the box federation provider is a simple<br />
mechanism for propagating the identity through HTTP Headers that specify the username and
group for the authenticated<br /> user. This has been built with vendor usecases such
as SiteMinder and IBM Tivoli Access Manager.</p></div><div class="section"><h2>Authorization<a
name="Authorization"></a></h2><p>The authorization role is used by providers
that make access decisions for the requested resources based on the<br /> effective
user identity context. This identity context is determined by the authentication provider
and the identity<br /> assertion provider mapping rules. Evaluation of the identity
contexts user and group principals against a set of<br /> access policies is done by the authorization provider in order
to determine whether access should be granted to<br /> the effective user for the requested
resource.</p><p>Out of the box, the Knox Gateway provides an ACL based authorization
provider that evaluates rules that comprise<br /> of username, groups and ip addresses.
These ACLs are bound to and protect resources at the service level.<br /> That is, they
protect access to the Hadoop services themselves based on user, group and remote ip address.</p></div><div
class="section"><h2>Audit<a name="Audit"></a></h2><p>The
ability to determine what actions were taken by whom during some period of time is provided
by the auditing<br /> capabilities of the Knox Gateway. The facility is built on an
extension of the Log4j framework and may be extended<br /> by replacing the out of the
box implementation with another.</p></div>
+</ul><p><img src="http://knox.apache.org/images/knox-overview.gif" alt="alt
text" /></p></div><div class="section"><h2>Overview<a name="Overview"></a></h2><p>The
Knox API Gateway is designed as a reverse proxy with consideration for pluggability in the
areas of<br /> policy enforcement, through providers and the backend services for which
it proxies requests.</p><p>Policy enforcement ranges from authentication/federation,
authorization, audit, dispatch, hostmapping<br /> and content rewrite rules. Policy
is enforced through a chain of providers that are defined within the topology<br />
deployment descriptor for each Hadoop cluster gated by Knox. The cluster definition is also
defined<br /> within the topology deployment descriptor and provides the Knox Gateway
with the layout of the Hadoop<br /> cluster for purposes of routing and translation
between user facing URLs and Hadoop cluster internals.</p><p>Each Hadoop cluster
that is protected by Knox has its set of REST APIs represented by a single cluster specific<br /> application context path. This allows the Knox
Gateway to both protect multiple Hadoop clusters and present<br /> the REST API consumer
with a single endpoint for access to all of the Hadoop services required, across the<br
/> multiple clusters.</p></div><div class="section"><h2>Supported
Hadoop Services<a name="Supported_Hadoop_Services"></a></h2><p>WebHDFS
(HDFS) Templeton (HCatalog) Stargate (HBase) Oozie Hive/JDBC</p></div><div
class="section"><h2>Authentication<a name="Authentication"></a></h2><p>Providers
with the role of authentication are responsible for collecting credentials presented by the
API<br /> consumer, validating them and communicating the successful or failed authentication
to the client or the<br /> rest of the provider chain.</p><p>Out of the
box, the Knox Gateway provides the Shiro authentication provider. This is a provider that
leverages<br /> the Apache Shiro project for authenticating BASIC credentials against
an LDAP user store. There is support for<br /> OpenLDAP, ApacheDS and Microsoft Active Directory.</p></div><div
class="section"><h2>Federation/SSO<a name="FederationSSO"></a></h2><p>For
customers that require credentials to be presented to a limited set of trusted entities within
the enterprise,<br /> the Knox Gateway may be configured to federate the authenticated
identity from an external authentication event.<br /> This is done through providers
with the role of federation. The out of the box federation provider is a simple<br />
mechanism for propagating the identity through HTTP Headers that specify the username and
group for the authenticated<br /> user. This has been built with vendor usecases such
as SiteMinder and IBM Tivoli Access Manager.</p></div><div class="section"><h2>Authorization<a
name="Authorization"></a></h2><p>The authorization role is used by providers
that make access decisions for the requested resources based on the<br /> effective
user identity context. This identity context is determined by the authentication provider and the identity<br /> assertion
provider mapping rules. Evaluation of the identity contexts user and group principals against
a set of<br /> access policies is done by the authorization provider in order to determine
whether access should be granted to<br /> the effective user for the requested resource.</p><p>Out
of the box, the Knox Gateway provides an ACL based authorization provider that evaluates rules
that comprise<br /> of username, groups and ip addresses. These ACLs are bound to and
protect resources at the service level.<br /> That is, they protect access to the Hadoop
services themselves based on user, group and remote ip address.</p></div><div
class="section"><h2>Audit<a name="Audit"></a></h2><p>The
ability to determine what actions were taken by whom during some period of time is provided
by the auditing<br /> capabilities of the Knox Gateway. The facility is built on an
extension of the Log4j framework and may be extended<br /> by replacing the out of the box implementation with another.</p></div><div
class="section"><h2>Deployment<a name="Deployment"></a></h2><p>Simply
by writing a topology deployment descriptor to the topologies directory of the Knox installation,
a<br /> new Hadoop cluster definition is processed, the policy enforcement providers
are configured and the application<br /> context path is made available for use by API
consumers.</p></div>
       </div>
     </div>
     <div class="clear">

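As context for the Shiro authentication provider described in the sections above: it is configured through `<param>` entries in a provider block of the topology deployment descriptor. The sketch below is a hypothetical, minimal example only; the LDAP host, port, and DN template are placeholder values, and the realm class package name may differ between Knox versions.

```xml
<!-- Hypothetical sketch of a Shiro authentication provider block for a
     Knox topology descriptor. LDAP host, port, and DN template are
     placeholder values; consult the Knox user guide for exact names. -->
<provider>
    <role>authentication</role>
    <name>ShiroProvider</name>
    <enabled>true</enabled>
    <param>
        <name>main.ldapRealm</name>
        <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
    </param>
    <param>
        <!-- {0} is replaced with the username from the BASIC credentials -->
        <name>main.ldapRealm.userDnTemplate</name>
        <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
    </param>
    <param>
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://localhost:33389</value>
    </param>
    <param>
        <!-- require HTTP BASIC authentication on all gateway URLs -->
        <name>urls./**</name>
        <value>authcBasic</value>
    </param>
</provider>
```

The same provider block works against OpenLDAP, ApacheDS, or Active Directory by pointing the context factory URL and DN template at the appropriate directory.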
Modified: knox/trunk/src/site/markdown/index.md
URL: http://svn.apache.org/viewvc/knox/trunk/src/site/markdown/index.md?rev=1586861&r1=1586860&r2=1586861&view=diff
==============================================================================
--- knox/trunk/src/site/markdown/index.md (original)
+++ knox/trunk/src/site/markdown/index.md Sat Apr 12 15:09:53 2014
@@ -60,8 +60,14 @@ application context path. This allows th
 the REST API consumer with a single endpoint for access to all of the Hadoop services required,
across the<br/>
 multiple clusters.
 
-Authentication
-------------
+## Supported Hadoop Services
+WebHDFS (HDFS)
+Templeton (HCatalog)
+Stargate (HBase)
+Oozie 
+Hive/JDBC
+
+## Authentication
 Providers with the role of authentication are responsible for collecting credentials presented
by the API<br/>
 consumer, validating them and communicating the successful or failed authentication to the
client or the<br/>
 rest of the provider chain.
@@ -70,16 +76,14 @@ Out of the box, the Knox Gateway provide
 the Apache Shiro project for authenticating BASIC credentials against an LDAP user store.
There is support for<br/>
 OpenLDAP, ApacheDS and Microsoft Active Directory.
 
-Federation/SSO
-------------
+## Federation/SSO
 For customers that require credentials to be presented to a limited set of trusted entities
within the enterprise,<br/>
 the Knox Gateway may be configured to federate the authenticated identity from an external
authentication event.<br/>
 This is done through providers with the role of federation. The out of the box federation
provider is a simple<br/>
 mechanism for propagating the identity through HTTP Headers that specify the username and
group for the authenticated<br/>
 user. This has been built with vendor usecases such as SiteMinder and IBM Tivoli Access Manager.
 
-Authorization
-------------
+## Authorization
 The authorization role is used by providers that make access decisions for the requested
resources based on the<br/>
 effective user identity context. This identity context is determined by the authentication
provider and the identity<br/>
 assertion provider mapping rules. Evaluation of the identity context's user and group principals
against a set of<br/>
@@ -90,8 +94,13 @@ Out of the box, the Knox Gateway provide
 of username, groups and ip addresses. These ACLs are bound to and protect resources at the
service level.<br/>
 That is, they protect access to the Hadoop services themselves based on user, group and remote
ip address.
 
-Audit
-------------
+## Audit
 The ability to determine what actions were taken by whom during some period of time is provided
by the auditing<br/>
 capabilities of the Knox Gateway. The facility is built on an extension of the Log4j framework
and may be extended<br/>
-by replacing the out of the box implementation with another.
\ No newline at end of file
+by replacing the out of the box implementation with another.
+
+## Deployment
+Simply by writing a topology deployment descriptor to the topologies directory of the Knox
installation, a<br/>
+new Hadoop cluster definition is processed, the policy enforcement providers are configured
and the application<br/>
+context path is made available for use by API consumers.
+



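The new Deployment section states that writing a topology deployment descriptor to the topologies directory is all that is needed to gate a cluster. As a rough illustration of what such a descriptor looks like, here is a hedged sketch combining an authentication provider, an ACL authorization provider, and a WebHDFS service definition; the host, port, group name, and ACL param name are assumptions drawn from Knox user guide conventions, not from this commit.

```xml
<!-- Hypothetical topology deployment descriptor sketch. The gateway
     section chains the policy enforcement providers; each service
     entry maps a user facing role to a Hadoop cluster internal URL. -->
<topology>
    <gateway>
        <provider>
            <role>authentication</role>
            <name>ShiroProvider</name>
            <enabled>true</enabled>
        </provider>
        <provider>
            <role>authorization</role>
            <name>AclsAuthz</name>
            <enabled>true</enabled>
            <param>
                <!-- ACL rule format: username;group;remote ip address,
                     bound to the WebHDFS service as described above -->
                <name>webhdfs.acl</name>
                <value>guest;analyst;127.0.0.1</value>
            </param>
        </provider>
    </gateway>
    <service>
        <role>WEBHDFS</role>
        <url>http://namenode-host:50070/webhdfs</url>
    </service>
</topology>
```

Dropping a file like this into the topologies directory would cause the cluster definition to be processed and a cluster specific application context path to be exposed to API consumers, as the Deployment section describes.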