From dev-return-232-apmail-argus-dev-archive=argus.apache.org@argus.incubator.apache.org Thu Sep 11 22:48:56 2014
Date: Thu, 11 Sep 2014 22:48:33 +0000 (UTC)
From: "Ramesh Mani (JIRA)"
To: dev@argus.incubator.apache.org
Reply-To: dev@argus.incubator.apache.org
Subject: [jira] [Updated] (ARGUS-5) Ability to write audit log in HDFS
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

[
https://issues.apache.org/jira/browse/ARGUS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ramesh Mani updated ARGUS-5:
----------------------------
Description:

{panel:title=Ability to write Logs into HDFS}
HdfsFileAppender is a log4j appender used to write logs into HDFS.
The configuration parameters are:

o	# HDFS appender
o	hdfs.xaaudit.logger=INFO,console,HDFSLOG
o	log4j.logger.xaaudit=${hdfs.xaaudit.logger}
o	log4j.additivity.xaaudit=false
o	log4j.appender.HDFSLOG=com.xasecure.authorization.hadoop.log.HdfsFileAppender
o	log4j.appender.HDFSLOG.File=/grid/0/var/log/hadoop/hdfs/argus_audit.log
o	log4j.appender.HDFSLOG.HdfsDestination=hdfs://ec2-54-88-128-112.compute-1.amazonaws.com:8020:/audit/hdfs/%hostname%/argus_audit.log
o	log4j.appender.HDFSLOG.layout=org.apache.log4j.PatternLayout
o	log4j.appender.HDFSLOG.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n %X{LogPath}
o	#****** HdfsFileRollingInterval -> HDFS file rollover time, e.g. 1min, 5min, ... 1hr, 2hrs, ... 1day, 2days, ... 1week, 2weeks, ... 1month, 2months, ...
o	log4j.appender.HDFSLOG.HdfsFileRollingInterval=3min
o	#****** FileRollingInterval -> local file rollover time, same format as above
o	log4j.appender.HDFSLOG.FileRollingInterval=1min
o	log4j.appender.HDFSLOG.HdfsLiveUpdate=true
o	log4j.appender.HDFSLOG.HdfsCheckInterval=2min

1)	HdfsFileAppender will log into the given HdfsDestination path.
2)	If the configured HDFS is unavailable, a local file named after the log4j File parameter will be created with the extension .cache.
3)	This local .cache file will be rolled over based on the FileRollingInterval parameter.
4)	Once HDFS is available and ready, logging will be done to the HdfsDestination provided.
5)	The local .cache file will then be moved into HdfsDestination.
6)	The log file created in the HDFS destination will be rolled over based on the HdfsFileRollingInterval parameter.
7)	HdfsLiveUpdate=true means that whenever HDFS is available, the appender will send the logs to the HDFS file. If false, local .cache files will be created and moved periodically into HdfsDestination.
8)	HdfsCheckInterval is the interval at which HDFS availability is rechecked after the first failure. During that time the local .cache file will hold the logs.

Argus Audit Logging into HDFS:
	. For audit logs, the "Policy Manager" should exclude the HDFS file path from auditing, to avoid the recursive call that would otherwise occur when logging the audit.
	. Configure the log4j parameters in xasecure-audit.xml and make the appender asynchronous. (Note that each agent will have its own xasecure-audit.xml.)
	. For auditing the HDFS agent, make the appender part of the NameNode and SecondaryNameNode.
	. For auditing the HBase agent, make the appender part of the Master and RegionServer.
	. For auditing the Hive agent, make it part of HiveServer2.

Regular Logging Usage:
	To enable regular logging, configure the appender the same way as other log4j appenders.
{panel}

> Ability to write audit log in HDFS
> ----------------------------------
>
>                 Key: ARGUS-5
>                 URL: https://issues.apache.org/jira/browse/ARGUS-5
>             Project: Argus
>          Issue Type: New Feature
>            Reporter: Selvamohan Neethiraj
>            Assignee: Ramesh Mani
>
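The rollover settings above (HdfsFileRollingInterval, FileRollingInterval, HdfsCheckInterval) take free-form values such as 3min, 2hrs or 1day. A parser along the following lines could turn those strings into milliseconds; this is a hypothetical sketch for illustration only, not the actual HdfsFileAppender code, and the class and method names are invented:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RollingInterval {
    // Accepts values like "3min", "2hrs", "1day", "2weeks", "1month",
    // following the unit spellings shown in the log4j comments above.
    private static final Pattern INTERVAL =
            Pattern.compile("(\\d+)\\s*(min|hr|hrs|day|days|week|weeks|month|months)");

    public static long parseIntervalMillis(String value) {
        Matcher m = INTERVAL.matcher(value.trim().toLowerCase());
        if (!m.matches()) {
            throw new IllegalArgumentException("Bad interval: " + value);
        }
        long n = Long.parseLong(m.group(1));
        switch (m.group(2)) {
            case "min":                  return n * 60_000L;
            case "hr": case "hrs":       return n * 3_600_000L;
            case "day": case "days":     return n * 86_400_000L;
            case "week": case "weeks":   return n * 7 * 86_400_000L;
            // Months are approximated as 30 days for rollover purposes.
            case "month": case "months": return n * 30 * 86_400_000L;
            default: throw new IllegalArgumentException("Bad unit in: " + value);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseIntervalMillis("3min")); // 180000
        System.out.println(parseIntervalMillis("2hrs")); // 7200000
        System.out.println(parseIntervalMillis("1day")); // 86400000
    }
}
```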
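Steps 1) through 5) and 8) describe a write-through-with-local-cache pattern: write to the destination when it is reachable, fall back to a local .cache file otherwise, and push the cached records over once the destination recovers. A minimal sketch of that pattern follows, using the local filesystem to stand in for HDFS; the class and method names are invented for illustration and are not taken from the Argus source:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CacheFallbackWriter {
    private final Path destination;   // stands in for the HDFS destination file
    private final Path cacheFile;     // local File parameter plus ".cache"
    private boolean destinationAvailable;

    public CacheFallbackWriter(Path localFile, Path destination, boolean available) {
        this.cacheFile = localFile.resolveSibling(localFile.getFileName() + ".cache");
        this.destination = destination;
        this.destinationAvailable = available;
    }

    // Steps 1)/2): write to the destination when it is reachable,
    // otherwise fall back to the local .cache file.
    public void append(String line) throws IOException {
        Path target = destinationAvailable ? destination : cacheFile;
        Files.writeString(target, line + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Steps 4)/5): once the destination is back, move any cached
    // records into it and remove the local cache file.
    public void destinationRecovered() throws IOException {
        destinationAvailable = true;
        if (Files.exists(cacheFile)) {
            Files.writeString(destination, Files.readString(cacheFile),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            Files.delete(cacheFile);
        }
    }
}
```

The real appender additionally rolls both files on their configured intervals and rechecks availability every HdfsCheckInterval; those aspects are omitted here for brevity.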
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)