knox-commits mailing list archives

From zbla...@apache.org
Subject svn commit: r1697940 - in /knox/trunk/books/0.7.0: ./ dev-guide/
Date Wed, 26 Aug 2015 14:15:27 GMT
Author: zblanco
Date: Wed Aug 26 14:15:27 2015
New Revision: 1697940

URL: http://svn.apache.org/r1697940
Log:
Fix numerous typos throughout the user-guide

Modified:
    knox/trunk/books/0.7.0/book_client-details.md
    knox/trunk/books/0.7.0/book_gateway-details.md
    knox/trunk/books/0.7.0/book_knox-samples.md
    knox/trunk/books/0.7.0/book_troubleshooting.md
    knox/trunk/books/0.7.0/config.md
    knox/trunk/books/0.7.0/config_advanced_ldap.md
    knox/trunk/books/0.7.0/config_authn.md
    knox/trunk/books/0.7.0/config_id_assertion.md
    knox/trunk/books/0.7.0/config_ldap_group_lookup.md
    knox/trunk/books/0.7.0/dev-guide/book.md
    knox/trunk/books/0.7.0/knox_cli.md
    knox/trunk/books/0.7.0/quick_start.md
    knox/trunk/books/0.7.0/service_hbase.md
    knox/trunk/books/0.7.0/service_hive.md
    knox/trunk/books/0.7.0/service_oozie.md
    knox/trunk/books/0.7.0/service_webhdfs.md

Modified: knox/trunk/books/0.7.0/book_client-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/book_client-details.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/book_client-details.md (original)
+++ knox/trunk/books/0.7.0/book_client-details.md Wed Aug 26 14:15:27 2015
@@ -31,7 +31,7 @@ The list below outlines the general requ
 * Support the notion of a SSO session for multiple Hadoop interactions
 * Support the multiple authentication and federation token capabilities of the Apache Knox Gateway
 * Promote the use of REST APIs as the dominant remote client mechanism for Hadoop services
-* Promote the the sense of Hadoop as a single unified product
+* Promote the sense of Hadoop as a single unified product
 * Aligned with the Apache Knox Gateway's overall goals for security
 
 The result is a very simple DSL ([Domain Specific Language](http://en.wikipedia.org/wiki/Domain-specific_language)) of sorts that is used via [Groovy](http://groovy.codehaus.org) scripts.
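
For readers new to the DSL, a minimal session sketch in the spirit of the shipped samples (the gateway URL and guest credentials assume the sandbox topology):

    import org.apache.hadoop.gateway.shell.Hadoop
    import org.apache.hadoop.gateway.shell.hdfs.Hdfs

    // Open an SSO session against the gateway (URL and credentials are assumptions).
    session = Hadoop.login( "https://localhost:8443/gateway/sandbox", "guest", "guest-password" )
    // List a directory via WebHDFS and print the JSON response.
    println Hdfs.ls( session ).dir( "/tmp" ).now().string
    session.shutdown()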

Modified: knox/trunk/books/0.7.0/book_gateway-details.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/book_gateway-details.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/book_gateway-details.md (original)
+++ knox/trunk/books/0.7.0/book_gateway-details.md Wed Aug 26 14:15:27 2015
@@ -21,7 +21,7 @@ This section describes the details of th
 
 * How URLs are mapped between a gateway that services multiple Hadoop clusters and the clusters themselves
 * How the gateway is configured through gateway-site.xml and cluster specific topology files
-* How to configure the various policy enfocement provider features such as authentication, authorization, auditing, hostmapping, etc.
+* How to configure the various policy enforcement provider features such as authentication, authorization, auditing, hostmapping, etc.
 
 ### URL Mapping ###
 

Modified: knox/trunk/books/0.7.0/book_knox-samples.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/book_knox-samples.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/book_knox-samples.md (original)
+++ knox/trunk/books/0.7.0/book_knox-samples.md Wed Aug 26 14:15:27 2015
@@ -59,7 +59,7 @@ The Knox samples can however be made to
 
 For manually installed Knox instances, there is really no way for the installer to know how to configure the topology file for you.
 
-Essentially, these steps are identical to the Amabari deployed instance except that #3 should be replaced with the configuration of the ootb sandbox.xml to point the configuration at the proper hosts and ports.
+Essentially, these steps are identical to the Ambari deployed instance except that #3 should be replaced with the configuration of the ootb sandbox.xml to point the configuration at the proper hosts and ports.
 
 1. You need to have ssh access to the environment in order for the localhost assumption within the samples to be valid.
 2. The Knox Demo LDAP Server is started - you can start it from Ambari

Modified: knox/trunk/books/0.7.0/book_troubleshooting.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/book_troubleshooting.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/book_troubleshooting.md (original)
+++ knox/trunk/books/0.7.0/book_troubleshooting.md Wed Aug 26 14:15:27 2015
@@ -85,7 +85,7 @@ If the gateway cannot contact one of the
     	at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
     	at org.apache.hadoop.gateway.dispatch.HttpClientDispatch.executeRequest(HttpClientDispatch.java:99)
 
-The the resulting behavior on the client will differ by client.
+The resulting behavior on the client will differ by client.
 For the client DSL executing the {GATEWAY_HOME}/samples/ExampleWebHdfsLs.groovy the output will look look like this.
 
     Caught: org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
@@ -120,7 +120,7 @@ When Knox is configured to accept reques
 	the following error is returned
 	curl: (52) Empty reply from server
 
-This is the default behavior for Jetty SSL listener. While the credentials to the default authentication provider continue to be username and password, we do not want to encourage sending these in clear text. Since prememptively sending BASIC credentials is a common pattern with REST APIs it would be unwise to redirect to a HTTPS listener thus allowing clear text passwords.
+This is the default behavior for Jetty SSL listener. While the credentials to the default authentication provider continue to be username and password, we do not want to encourage sending these in clear text. Since preemptively sending BASIC credentials is a common pattern with REST APIs it would be unwise to redirect to a HTTPS listener thus allowing clear text passwords.
 
 To resolve this issue, we have two options:
 
@@ -167,7 +167,7 @@ The client will likely see something alo
 
 #### Using ldapsearch to verify ldap connectivtiy and credentials
 
-If your authentication to knox fails and you believe your are using correct creedentilas, you could try to verify the connectivity and credentials usong ldapsearch, assuming you are using ldap directory for authentication.
+If your authentication to knox fails and you believe you are using correct credentials, you could try to verify the connectivity and credentials using ldapsearch, assuming you are using ldap directory for authentication.
 
 Assuming you are using the default values that came out of box with knox, your ldapsearch command would be like the following
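
As a sketch, assuming the demo ApacheDS server on localhost:33389 and the sample guest account from users.ldif (host, port and DNs are all assumptions), the verification might look like:

    # Bind as the guest user and read back its own entry; a clean bind
    # confirms both connectivity and credentials.
    ldapsearch -h localhost -p 33389 \
        -D "uid=guest,ou=people,dc=hadoop,dc=apache,dc=org" -w guest-password \
        -b "dc=hadoop,dc=apache,dc=org" "(uid=guest)"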
 
@@ -274,7 +274,7 @@ If the client hangs after emitting the l
     Status : {...}
     Creating table 'test_table'...
 
-HBase and Starget can be restred using the following commands on the Hadoop Sandbox VM.
+HBase and Stargate can be restarted using the following commands on the Hadoop Sandbox VM.
 You will need to ssh into the VM in order to run these commands.
 
     sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop master
@@ -285,7 +285,7 @@ You will need to ssh into the VM in orde
 ### SSL Certificate Issues ###
 
 Clients that do not trust the certificate presented by the server will behave in different ways.
-A browser will typically warn you of the inability to trust the receieved certificate and give you an opportunity to add an exception for the particular certificate.
+A browser will typically warn you of the inability to trust the received certificate and give you an opportunity to add an exception for the particular certificate.
 Curl will present you with the follow message and instructions for turning of certificate verification:
 
     curl performs SSL certificate verification by default, using a "bundle" 

Modified: knox/trunk/books/0.7.0/config.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config.md (original)
+++ knox/trunk/books/0.7.0/config.md Wed Aug 26 14:15:27 2015
@@ -151,7 +151,7 @@ The general outline of a provider elemen
 
 /topology/gateway/provider/role
 : Defines the role of a particular provider.
-There are a number of pre-defined roles used by out-of-the-box provider plugins for the gateay.
+There are a number of pre-defined roles used by out-of-the-box provider plugins for the gateway.
 These roles are: authentication, identity-assertion, authentication, rewrite and hostmap
 
 /topology/gateway/provider/name
@@ -217,7 +217,7 @@ The basic structure is shown below.
         ...
     </topology>
 
-This mapping is required because the Hadoop servies running within the cluster are unaware that they are being accessed from outside the cluster.
+This mapping is required because the Hadoop services running within the cluster are unaware that they are being accessed from outside the cluster.
 Therefore URLs returned as part of REST API responses will typically contain internal host names.
 Since clients outside the cluster will be unable to resolve those host name they must be mapped to external host names.
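
A hostmap provider entry of the kind implied here, sketched with sandbox host names (the names are illustrative):

    <provider>
        <role>hostmap</role>
        <name>static</name>
        <enabled>true</enabled>
        <!-- param name = external host name, param value = comma-separated
             internal host names to be rewritten in service responses -->
        <param>
            <name>localhost</name>
            <value>sandbox,sandbox.hortonworks.com</value>
        </param>
    </provider>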
 
@@ -337,7 +337,7 @@ Do not assume that the encryption if suf
 
 A specific user should be created to run the gateway this user will be the only user with permissions for the persisted master file.
 
-See the Knox CLI section for descriptions of the command line utilties related to the master secret.
+See the Knox CLI section for descriptions of the command line utilities related to the master secret.
 
 #### Management of Security Artifacts ####
 
@@ -374,7 +374,7 @@ By leveraging the algorithm described ab
 2. Using an NFS mount as a central location for the artifacts would provide a single source of truth without the need to replicate them over the network. Of course, NFS mounts have their own challenges.
 3. Using the KnoxCLI to create and manage the security artifacts.
 
-See the Knox CLI section for descriptions of the command line utilties related to the security artifact management.
+See the Knox CLI section for descriptions of the command line utilities related to the security artifact management.
 
 #### Keystores ####
 In order to provide your own certificate for use by the gateway, you will need to either import an existing key pair into a Java keystore or generate a self-signed cert using the Java keytool.
@@ -412,13 +412,13 @@ The following will allow you to provisio
     keytool -genkey -keyalg RSA -alias gateway-identity -keystore gateway.jks \
         -storepass {master-secret} -validity 360 -keysize 2048
 
-Keytool will prompt you for a number of elements used will comprise the distiniguished name (DN) within your certificate. 
+Keytool will prompt you for a number of elements that will comprise the distinguished name (DN) within your certificate. 
 
 *NOTE:* When it prompts you for your First and Last name be sure to type in the hostname of the machine that your gateway instance will be running on. This is used by clients during hostname verification to ensure that the presented certificate matches the hostname that was used in the URL for the connection - so they need to match.
 
 *NOTE:* When it prompts for the key password just press enter to ensure that it is the same as the keystore password. Which, as was described earlier, must match the master secret for the gateway instance. Alternatively, you can set it to another passphrase - take note of it and set the gateway-identity-passphrase alias to that passphrase using the Knox CLI.
 
-See the Knox CLI section for descriptions of the command line utilties related to the management of the keystores.
+See the Knox CLI section for descriptions of the command line utilities related to the management of the keystores.
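
As a quick check after generating the key pair, a sketch for confirming the gateway-identity entry and the hostname in its DN (substitute your actual master secret):

    keytool -list -v -keystore gateway.jks \
        -storepass {master-secret} -alias gateway-identity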
 
 ##### Using a CA Signed Key Pair #####
 For certain deployments a certificate key pair that is signed by a trusted certificate authority is required. There are a number of different ways in which these certificates are acquired and can be converted and imported into the Apache Knox keystore.
@@ -458,7 +458,7 @@ The credential stores in Knox use the JC
 
 Keytool may be used to create credential stores but the Knox CLI section details how to create aliases. These aliases are managed within credential stores which are created by the CLI as needed. The simplest approach is to create the gateway-identity-passpharse alias with the Knox CLI. This will create the credential store if it doesn't already exist and add the key passphrase.
 
-See the Knox CLI section for descriptions of the command line utilties related to the management of the credential stores.
+See the Knox CLI section for descriptions of the command line utilities related to the management of the credential stores.
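
A sketch of that simplest approach with the CLI (the passphrase value is a placeholder):

    # Creates the credential store if it does not yet exist and adds the alias.
    bin/knoxcli.sh create-alias gateway-identity-passphrase --value {key-passphrase}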
 
 ##### Provisioning of Keystores #####
 Once you have created these keystores you must move them into place for the gateway to discover them and use them to represent its identity for SSL connections. This is done by copying the keystores to the `{GATEWAY_HOME}/data/security/keystores` directory for your gateway install.

Modified: knox/trunk/books/0.7.0/config_advanced_ldap.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_advanced_ldap.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_advanced_ldap.md (original)
+++ knox/trunk/books/0.7.0/config_advanced_ldap.md Wed Aug 26 14:15:27 2015
@@ -161,7 +161,7 @@ The configuration that you would use cou
 	<!-- search base used to search for user bind DN.
 	     Defaults to the value of main.ldapRealm.searchBase. 
 	     If main.ldapRealm.userSearchAttributeName is defined, 
-	     vlaue for main.ldapRealm.searchBase  or main.ldapRealm.userSearchBase 
+	     value for main.ldapRealm.searchBase  or main.ldapRealm.userSearchBase 
 	     should be defined -->
 	<param>
 		<name>main.ldapRealm.userSearchBase</name>
@@ -171,7 +171,7 @@ The configuration that you would use cou
 	<!-- search base used to search for groups.
 	     Defaults to the value of main.ldapRealm.searchBase.
 		   If value of main.ldapRealm.authorizationEnabled is true,
-	     vlaue for main.ldapRealm.searchBase  or main.ldapRealm.groupSearchBase should be defined -->
+	     value for main.ldapRealm.searchBase  or main.ldapRealm.groupSearchBase should be defined -->
 	<param>
 		<name>main.ldapRealm.groupSearchBase</name>
 		<value>dc=hadoop,dc=apache,dc=org</value>
@@ -179,7 +179,7 @@ The configuration that you would use cou
 
 	<!-- optional, default value: groupOfNames
 	     Objectclass to identify group entries in ldap, used to build search 
-       filter to search for group entires --> 
+       filter to search for group entries --> 
 	<param>
 		<name>main.ldapRealm.groupObjectClass</name>
 		<value>groupOfNames</value>
@@ -222,7 +222,7 @@ The configuration that you would use cou
 
 The value for this could have one of the following 2 formats
 
-plantextpassword
+plaintextpassword
 ${ALIAS=ldcSystemPassword}
 
 The first format specifies the password in plain text in the provider configuration.
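
For the aliased form, a sketch of provisioning the alias for a topology named sandbox (cluster name and password are assumptions):

    bin/knoxcli.sh create-alias ldcSystemPassword \
        --cluster sandbox --value guest-password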

Modified: knox/trunk/books/0.7.0/config_authn.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_authn.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_authn.md (original)
+++ knox/trunk/books/0.7.0/config_authn.md Wed Aug 26 14:15:27 2015
@@ -100,7 +100,7 @@ This section discusses the LDAP configur
 
 **main.ldapRealm.userDnTemplate** - in order to bind a simple username to an LDAP server that generally requires a full distinguished name (DN), we must provide the template into which the simple username will be inserted. This template allows for the creation of a DN by injecting the simple username into the common name (CN) portion of the DN. **This element will need to be customized to reflect your deployment environment.** The template provided in the sample is only an example and is valid only within the LDAP schema distributed with Knox and is represented by the users.ldif file in the {GATEWAY_HOME}/conf directory.
 
-**main.ldapRealm.contextFactory.url** - this element is the URL that represents the host and port of LDAP server. It also includes the scheme of the protocol to use. This may be either ldap or ldaps depending on whether you are communicating with the LDAP over SSL (higly recommended). **This element will need to be cusomized to reflect your deployment environment.**.
+**main.ldapRealm.contextFactory.url** - this element is the URL that represents the host and port of LDAP server. It also includes the scheme of the protocol to use. This may be either ldap or ldaps depending on whether you are communicating with the LDAP over SSL (highly recommended). **This element will need to be customized to reflect your deployment environment.**.
 
 **main.ldapRealm.contextFactory.authenticationMechanism** - this element indicates the type of authentication that should be performed against the LDAP server. The current default value is `simple` which indicates a simple bind operation. This element should not need to be modified and no mechanism other than a simple bind has been tested for this particular release.
 
@@ -110,7 +110,7 @@ This section discusses the LDAP configur
 
 You would use LDAP configuration as documented above to authenticate against Active Directory as well.
 
-Some Active Directory specifc things to keep in mind:
+Some Active Directory specific things to keep in mind:
 
 Typical AD main.ldapRealm.userDnTemplate value looks slightly different, such as
     cn={0},cn=users,DC=lab,DC=sample,dc=com
@@ -125,7 +125,7 @@ If your AD is configured to authenticate
 In order to communicate with your LDAP server over SSL (again, highly recommended), you will need to modify the topology file in a couple ways and possibly provision some keying material.
 
 1. **main.ldapRealm.contextFactory.url** must be changed to have the `ldaps` protocol scheme and the port must be the SSL listener port on your LDAP server.
-2. Identity certificate (keypair) provisioned to LDAP server - your LDAP server specific documentation should indicate what is requried for providing a cert or keypair to represent the LDAP server identity to connecting clients.
+2. Identity certificate (keypair) provisioned to LDAP server - your LDAP server specific documentation should indicate what is required for providing a cert or keypair to represent the LDAP server identity to connecting clients.
 3. Trusting the LDAP Server's public key - if the LDAP Server's identity certificate is issued by a well known and trusted certificate authority and is already represented in the JRE's cacerts truststore then you don't need to do anything for trusting the LDAP server's cert. If, however, the cert is selfsigned or issued by an untrusted authority you will need to either add it to the cacerts keystore or to another truststore that you may direct Knox to utilize through a system property.
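
For the self-signed case in item 3 above, a hedged keytool sketch (the exported PEM file name and alias are assumptions; 'changeit' is the stock cacerts password):

    keytool -import -alias ldap-server -file ldap-server.pem \
        -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit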
 
 #### Session Configuration ####

Modified: knox/trunk/books/0.7.0/config_id_assertion.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_id_assertion.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_id_assertion.md (original)
+++ knox/trunk/books/0.7.0/config_id_assertion.md Wed Aug 26 14:15:27 2015
@@ -33,7 +33,7 @@ The following configuration is required
         <enabled>true</enabled>
     </provider>
 
-This particular configuration indicates that the Default identity assertion provider is enabled and that there are no principal mapping rules to apply to identities flowing from the authentication in the gateway to the backend Hadoop cluster services. The primary principal of the current subject will therefore be asserted via a query paramter or as a form parameter - ie. ?user.name={primaryPrincipal}
+This particular configuration indicates that the Default identity assertion provider is enabled and that there are no principal mapping rules to apply to identities flowing from the authentication in the gateway to the backend Hadoop cluster services. The primary principal of the current subject will therefore be asserted via a query parameter or as a form parameter - ie. ?user.name={primaryPrincipal}
 
     <provider>
         <role>identity-assertion</role>
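
To make the assertion concrete, a sketch of the effect (topology name and user are assumptions):

    # Authenticate to the gateway as guest ...
    curl -i -k -u guest:guest-password \
        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS'
    # ... and the gateway dispatches to WebHDFS roughly as:
    #   http://{webhdfs-host}:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=guest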

Modified: knox/trunk/books/0.7.0/config_ldap_group_lookup.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/config_ldap_group_lookup.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/config_ldap_group_lookup.md (original)
+++ knox/trunk/books/0.7.0/config_ldap_group_lookup.md Wed Aug 26 14:15:27 2015
@@ -33,7 +33,7 @@ Please see below a sample Shiro configur
             <!-- 
             session timeout in minutes,  this is really idle timeout,
             defaults to 30mins, if the property value is not defined,, 
-            current client authentication would expire if client idles contiuosly for more than this value
+            current client authentication would expire if client idles continuously for more than this value
             -->
             <!-- defaults to: 30 minutes
             <param>
@@ -168,12 +168,12 @@ java -jar bin/ldap.jar conf
 java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
 
 Following call to WebHDFS should report HTTP/1.1 401 Unauthorized
-As guest is not a member of group "analyst", authorization prvoider states user should be member of group "analyst"
+As guest is not a member of group "analyst", authorization provider states user should be member of group "analyst"
 
 curl  -i -v  -k -u guest:guest-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
 Following call to WebHDFS should report: {"Path":"/user/sam"}
-As sam is a member of group "analyst", authorization prvoider states user should be member of group "analyst"
+As sam is a member of group "analyst", authorization provider states user should be member of group "analyst"
 
 curl  -i -v  -k -u sam:sam-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
@@ -192,12 +192,12 @@ java -jar bin/ldap.jar conf
 java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
 
 Following call to WebHDFS should report HTTP/1.1 401 Unauthorized
-As guest is not a member of group "analyst", authorization prvoider states user should be member of group "analyst"
+As guest is not a member of group "analyst", authorization provider states user should be member of group "analyst"
 
 curl  -i -v  -k -u guest:guest-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
 Following call to WebHDFS should report: {"Path":"/user/sam"}
-As sam is a member of group "analyst", authorization prvoider states user should be member of group "analyst"
+As sam is a member of group "analyst", authorization provider states user should be member of group "analyst"
 
 curl  -i -v  -k -u sam:sam-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
@@ -214,15 +214,15 @@ cp templates/users.ldapdynamicgroups.ldi
 java -jar bin/ldap.jar conf
 java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
 
-Please note that user.ldapdynamicgroups.ldif also loads ncessary schema to create dynamic groups in Apache DS.
+Please note that user.ldapdynamicgroups.ldif also loads necessary schema to create dynamic groups in Apache DS.
 
 Following call to WebHDFS should report HTTP/1.1 401 Unauthorized
-As guest is not a member of dynamic group "directors", authorization prvoider states user should be member of group "directors"
+As guest is not a member of dynamic group "directors", authorization provider states user should be member of group "directors"
 
 curl  -i -v  -k -u guest:guest-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
 Following call to WebHDFS should report: {"Path":"/user/bob"}
-As bob is a member of dynamic group "directors", authorization prvoider states user should be member of group "directors"
+As bob is a member of dynamic group "directors", authorization provider states user should be member of group "directors"
 
 curl  -i -v  -k -u sam:sam-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 

Modified: knox/trunk/books/0.7.0/dev-guide/book.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/dev-guide/book.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/dev-guide/book.md (original)
+++ knox/trunk/books/0.7.0/dev-guide/book.md Wed Aug 26 14:15:27 2015
@@ -502,7 +502,7 @@ public void testDevGuideSample() throws
 There are a number of extension points available in the gateway: services, providers, rewrite steps and functions, etc.
 All of these use the Java ServiceLoader mechanism for their discovery.
 There are two ways to make these extensions available on the class path at runtime.
-The first way to to add a new module to the project and have the extension "built-in".
+The first way is to add a new module to the project and have the extension "built-in".
 The second is to add the extension to the class path of the server after it is installed.
 Both mechanism are described in more detail below.
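
As a reminder of the ServiceLoader convention both mechanisms rely on, the extension jar carries a provider-configuration file named for the extension interface; the interface and implementation class below are illustrative:

    # META-INF/services/org.apache.hadoop.gateway.deploy.ProviderDeploymentContributor
    org.example.knox.CustomProviderDeploymentContributor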
 

Modified: knox/trunk/books/0.7.0/knox_cli.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/knox_cli.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/knox_cli.md (original)
+++ knox/trunk/books/0.7.0/knox_cli.md Wed Aug 26 14:15:27 2015
@@ -26,7 +26,7 @@ The knoxcli.sh script is located in the
 ##### `bin/knoxcli.sh [--help]` #####
 prints help for all commands
 
-#### Knox Verison Info ####
+#### Knox Version Info ####
 ##### `bin/knoxcli.sh version [--help]` #####
 Displays Knox version information.
 

Modified: knox/trunk/books/0.7.0/quick_start.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/quick_start.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/quick_start.md (original)
+++ knox/trunk/books/0.7.0/quick_start.md Wed Aug 26 14:15:27 2015
@@ -195,7 +195,7 @@ To validate that see the sections for th
         'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=CREATE'
 
     curl -i -k -u guest:guest-password -T LICENSE -X PUT \
-        '{Value of Location header from response response above}'
+        '{Value of Location header from response above}'
 
 #### Get a file in HDFS via Knox.
 

Modified: knox/trunk/books/0.7.0/service_hbase.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/service_hbase.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/service_hbase.md (original)
+++ knox/trunk/books/0.7.0/service_hbase.md Wed Aug 26 14:15:27 2015
@@ -29,7 +29,7 @@ See the HBase Stargate Setup section bel
 #### HBase Examples ####
 
 The examples below illustrate the set of basic operations with HBase instance using Stargate REST API.
-Use following link to get more more details about HBase/Stargate API: http://wiki.apache.org/hadoop/Hbase/Stargate.
+Use following link to get more details about HBase/Stargate API: http://wiki.apache.org/hadoop/Hbase/Stargate.
 
 Note: Some HBase examples may not work due to enabled [Access Control](https://hbase.apache.org/book/hbase.accesscontrol.configuration.html). User may not be granted for performing operations in samples. In order to check if Access Control is configured in the HBase instance verify hbase-site.xml for a presence of `org.apache.hadoop.hbase.security.access.AccessController` in `hbase.coprocessor.master.classes` and `hbase.coprocessor.region.classes` properties.

 To grant the Read, Write, Create permissions to `guest` user execute the following command:
@@ -277,7 +277,7 @@ After launching the shell, execute the f
     * endTime(Long) - the upper bound for filtration by time.
     * times(Long startTime, Long endTime) - the lower and upper bounds for filtration by time.
     * filter(String) - the filter XML definition.
-    * maxVersions(Integer) - the the maximum number of versions to return.
+    * maxVersions(Integer) - the maximum number of versions to return.
 * Response
     * scannerId : String - the scanner ID of the created scanner. Consumes body.
 * Example
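
A client DSL sketch of creating a scanner, assuming the builder methods listed above (the table name and session setup are placeholders):

    import org.apache.hadoop.gateway.shell.hbase.HBase

    // Create a scanner over 'test_table' and capture the returned scanner ID.
    def scannerId = HBase.session( session ).table( "test_table" ).scanner().create().maxVersions( 1 ).now().scannerId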

Modified: knox/trunk/books/0.7.0/service_hive.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/service_hive.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/service_hive.md (original)
+++ knox/trunk/books/0.7.0/service_hive.md Wed Aug 26 14:15:27 2015
@@ -63,7 +63,7 @@ By default the gateway is configured to
 
 #### Hive Examples ####
 
-This guide provides detailed examples for how to to some basic interactions with Hive via the Apache Knox Gateway.
+This guide provides detailed examples for how to do some basic interactions with Hive via the Apache Knox Gateway.
 
 ##### Hive Setup #####
 
@@ -231,7 +231,7 @@ Each line from the file below will need
     statement.close();
     connection.close();
 
-Exampes use 'log.txt' with content:
+Examples use 'log.txt' with content:
 
     2012-02-03 18:35:34 SampleClass6 [INFO] everything normal for id 577725851
     2012-02-03 18:35:34 SampleClass4 [FATAL] system problem at id 1991281254
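
For orientation, the JDBC URL these examples connect with routes HiveServer2 over HTTP through the gateway; a sketch with assumed host, topology and truststore values:

    // ssl and truststore parameters are needed because the gateway listener is TLS.
    String url = "jdbc:hive2://localhost:8443/;ssl=true;"
        + "sslTrustStore=gateway.jks;trustStorePassword=knoxsecret;"
        + "transportMode=http;httpPath=gateway/sandbox/hive";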

Modified: knox/trunk/books/0.7.0/service_oozie.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/service_oozie.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/service_oozie.md (original)
+++ knox/trunk/books/0.7.0/service_oozie.md Wed Aug 26 14:15:27 2015
@@ -21,7 +21,7 @@
 Oozie is a Hadoop component provides complex job workflows to be submitted and managed.
 Please refer to the latest [Oozie documentation](http://oozie.apache.org/docs/4.0.0/) for details.
 
-In order to make Oozie accessible via the gateway there are several important Haddop configuration settings.
+In order to make Oozie accessible via the gateway there are several important Hadoop configuration settings.
 These all relate to the network endpoint exposed by various Hadoop services.
 
 The HTTP endpoint at which Oozie is running can be found via the oozie.base.url property in the oozie-site.xml file.
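
Once that endpoint is known it lands in the topology file as a service entry; a sketch with an assumed host and the default Oozie port:

    <service>
        <role>OOZIE</role>
        <url>http://sandbox.hortonworks.com:11000/oozie</url>
    </service>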

Modified: knox/trunk/books/0.7.0/service_webhdfs.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/service_webhdfs.md?rev=1697940&r1=1697939&r2=1697940&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/service_webhdfs.md (original)
+++ knox/trunk/books/0.7.0/service_webhdfs.md Wed Aug 26 14:15:27 2015
@@ -215,7 +215,7 @@ Use can use cURL to directly invoke the
 ###### put() - Write a file into HDFS (CREATE)
 
 * Request
-    * text( String text ) - Text to upload to HDFS.  Takes precidence over file if both present.
+    * text( String text ) - Text to upload to HDFS.  Takes precedence over file if both present.
     * file( String name ) - The name of a local file to upload to HDFS.
     * to( String name ) - The fully qualified name to create in HDFS.
 * Response
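
The corresponding client DSL call, in the style of the shipped samples (file names are placeholders):

    // Upload a local file; per the note above, text() would take precedence if both were given.
    Hdfs.put( session ).file( "LICENSE" ).to( "/tmp/LICENSE" ).now()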


