knox-commits mailing list archives

From dillido...@apache.org
Subject svn commit: r1542446 - /incubator/knox/trunk/books/0.3.0/book_troubleshooting.md
Date Sat, 16 Nov 2013 01:50:39 GMT
Author: dillidorai
Date: Sat Nov 16 01:50:39 2013
New Revision: 1542446

URL: http://svn.apache.org/r1542446
Log:
formatting changes

Modified:
    incubator/knox/trunk/books/0.3.0/book_troubleshooting.md

Modified: incubator/knox/trunk/books/0.3.0/book_troubleshooting.md
URL: http://svn.apache.org/viewvc/incubator/knox/trunk/books/0.3.0/book_troubleshooting.md?rev=1542446&r1=1542445&r2=1542446&view=diff
==============================================================================
--- incubator/knox/trunk/books/0.3.0/book_troubleshooting.md (original)
+++ incubator/knox/trunk/books/0.3.0/book_troubleshooting.md Sat Nov 16 01:50:39 2013
@@ -128,16 +128,20 @@ TODO:Kevin - What does it look like when
 
 If you see an error like the following in your console while submitting a job using the groovy shell, it is likely that the authenticated user does not have a home directory on HDFS.
 
+<pre><code>
 Caught: org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 403 Forbidden
 org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 403 Forbidden
+</code></pre>
 
 You would also see this error if you try a file operation on the home directory of the authenticating user.
 
 The error looks a little different, as shown below, if you attempt the operation with cURL.
 
+<pre><code>
 {"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission denied: user=tom, access=WRITE, inode=\"/user\":hdfs:hdfs:drwxr-xr-x"}}
+</code></pre>
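The cURL case above can be reproduced with a WebHDFS call routed through the gateway; a hypothetical invocation, assuming the `sandbox` topology, a gateway at `localhost:8443`, and placeholder credentials for the user `tom`:

```shell
# Attempt to create a file under the user's home directory via WebHDFS
# through the Knox gateway. Host, port, topology name, user and password
# are illustrative placeholders; without /user/tom on HDFS this returns
# the 403 AccessControlException shown above.
curl -ik -u tom:tom-password -X PUT \
  'https://localhost:8443/gateway/sandbox/webhdfs/v1/user/tom/test.txt?op=CREATE'
```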
 
-Resolution
+#### Resolution
 
 Create the home directory for the user on HDFS.
 The home directory is typically of the form /user/<userid> and should be owned by the user.
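The resolution above can be sketched as the following commands, run as the HDFS superuser; the user name `tom` is a placeholder:

```shell
# Run as the HDFS superuser (typically 'hdfs') on a cluster node.
# 'tom' stands in for the authenticated user's id.
hadoop fs -mkdir /user/tom
hadoop fs -chown tom:tom /user/tom
```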
@@ -152,6 +156,7 @@ If the hadoop cluster is secured with Ke
 
 In either case, if the user does not have such an OS account, their file permissions are based on user ownership of files or the "other" permission in "ugo" POSIX permissions. The user does not get any file permissions as a member of any group if you are using the default hadoop.security.group.mapping.
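With the default shell-based group mapping, group resolution can be checked on the NameNode host; a sketch, with `tom` as a placeholder user:

```shell
# With the default hadoop.security.group.mapping (shell-based),
# groups come from the OS account on the NameNode host, so check both.
id tom            # OS account and groups, if the account exists
hdfs groups tom   # groups as resolved by HDFS
```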

 
+TODO: add sample error message from running test on secure cluster with missing OS account
 
 ### HBase Issues ###
 


