ranger-dev mailing list archives

From "Ramesh Mani (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (ARGUS-5) Ability to write audit log in HDFS
Date Thu, 11 Sep 2014 22:34:33 GMT

     [ https://issues.apache.org/jira/browse/ARGUS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ramesh Mani updated ARGUS-5:
----------------------------
    Description: 
-	HdfsFileAppender is a log4j appender used to write logs into HDFS.
-	The following are its configuration parameters:

# HDFS appender
#
hdfs.xaaudit.logger=INFO,console,HDFSLOG
log4j.logger.xaaudit=${hdfs.xaaudit.logger}
log4j.additivity.xaaudit=false
log4j.appender.HDFSLOG=com.xasecure.authorization.hadoop.log.HdfsFileAppender
log4j.appender.HDFSLOG.File=/grid/0/var/log/hadoop/hdfs/argus_audit.log
log4j.appender.HDFSLOG.HdfsDestination=hdfs://ec2-54-88-128-112.compute-1.amazonaws.com:8020:/audit/hdfs/%hostname%/argus_audit.log
log4j.appender.HDFSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.HDFSLOG.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n %X{LogPath}
# HdfsFileRollingInterval -> HDFS file rollover time, e.g. 1min, 5min, ... 1hr, 2hrs, ... 1day, 2days, ... 1week, 2weeks, ... 1month, 2months, ...
log4j.appender.HDFSLOG.HdfsFileRollingInterval=3min
# FileRollingInterval -> local file rollover time, same format as above
log4j.appender.HDFSLOG.FileRollingInterval=1min
log4j.appender.HDFSLOG.HdfsLiveUpdate=true
log4j.appender.HDFSLOG.HdfsCheckInterval=2min
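
The rolling-interval values above follow a number-plus-unit format (1min, 2hrs, 1day, ...). A minimal sketch of how such a value could be parsed into milliseconds, assuming only the unit spellings shown in the comments above (the class and method names here are illustrative, not part of the actual appender):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IntervalParser {
    // Matches values like "3min", "2hrs", "1day"; the unit spellings are
    // taken from the examples in the log4j comments above.
    private static final Pattern INTERVAL =
            Pattern.compile("(\\d+)\\s*(min|hrs?|days?|weeks?|months?)");

    public static long toMillis(String value) {
        Matcher m = INTERVAL.matcher(value.trim().toLowerCase());
        if (!m.matches()) {
            throw new IllegalArgumentException("Bad interval: " + value);
        }
        long n = Long.parseLong(m.group(1));
        String unit = m.group(2);
        if (unit.startsWith("min"))  return n * 60_000L;
        if (unit.startsWith("hr"))   return n * 3_600_000L;
        if (unit.startsWith("day"))  return n * 86_400_000L;
        if (unit.startsWith("week")) return n * 7 * 86_400_000L;
        return n * 30 * 86_400_000L; // month, approximated here as 30 days
    }

    public static void main(String[] args) {
        System.out.println(toMillis("3min")); // 180000
        System.out.println(toMillis("2hrs")); // 7200000
    }
}
```
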

1)	HdfsFileAppender writes logs to the configured HdfsDestination path.
2)	If the configured HDFS is unavailable, a local file is created at the path given by the log4j File parameter, with a .cache extension.
3)	This local .cache file is rolled over based on the FileRollingInterval parameter.
4)	Once HDFS is available and ready, logging resumes to the configured HdfsDestination.
5)	The local .cache file is then moved into the HdfsDestination.
6)	The log file created in the HDFS destination is rolled over based on the HdfsFileRollingInterval parameter.
7)	When HdfsLiveUpdate is true, the appender sends logs to the HDFS file whenever HDFS is available. When false, local .cache files are created and moved into the HdfsDestination periodically.
8)	HdfsCheckInterval is the interval at which HDFS availability is rechecked after the first failure. During that time the local .cache file holds the logs.
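
The failover behavior in steps 1-8 can be sketched as a small decision routine (a simplified model of the described behavior, not the actual appender code; all names are illustrative):

```java
public class FailoverModel {
    enum Target { HDFS_FILE, LOCAL_CACHE }

    // Simplified model of where the appender writes next, per steps 1-8:
    // with HdfsLiveUpdate=true it writes to HDFS whenever HDFS is
    // reachable; otherwise (or while HDFS is down) it buffers into the
    // local .cache file, which is later moved into the HdfsDestination.
    static Target nextWrite(boolean hdfsAvailable, boolean hdfsLiveUpdate) {
        if (hdfsLiveUpdate && hdfsAvailable) {
            return Target.HDFS_FILE;
        }
        return Target.LOCAL_CACHE;
    }

    public static void main(String[] args) {
        System.out.println(nextWrite(true, true));   // HDFS_FILE
        System.out.println(nextWrite(false, true));  // LOCAL_CACHE
        System.out.println(nextWrite(true, false));  // LOCAL_CACHE
    }
}
```
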

  Argus Audit Logging into HDFS:

	-  For audit logs, the Policy Manager should exclude the HDFS file path from auditing, to avoid the recursive call that would otherwise occur when logging the audit itself.
	-  Configure the log4j parameters in xasecure-audit.xml and make the appender asynchronous. (Note that each agent has its own xasecure-audit.xml.)
	-  For auditing the HDFS agent, make the appender part of the NameNode and SecondaryNameNode.
	-  For auditing the HBase agent, make the appender part of the Master and RegionServer.
	-  For auditing the Hive agent, make it part of HiveServer2.

          Regular Logging Usage:

		To enable regular logging, configure the appender the same way as any other log4j appender.
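
For example, an ordinary (non-audit) logger could be routed through the same appender in the usual log4j fashion (the logger name com.example.MyService is illustrative, not from the source):

```properties
# Route an ordinary application logger through the HDFS appender,
# exactly as with any other log4j appender.
log4j.logger.com.example.MyService=INFO, HDFSLOG
log4j.additivity.com.example.MyService=false
```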
	


> Ability to write audit log in HDFS
> ----------------------------------
>
>                 Key: ARGUS-5
>                 URL: https://issues.apache.org/jira/browse/ARGUS-5
>             Project: Argus
>          Issue Type: New Feature
>            Reporter: Selvamohan Neethiraj
>            Assignee: Ramesh Mani
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
