ranger-dev mailing list archives

From "Don Bosco Durai (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (RANGER-1310) Ranger Audit framework enhancement to provide an option to allow audit records to be spooled to local disk first before sending it to destinations
Date Thu, 26 Jan 2017 02:39:27 GMT

    [ https://issues.apache.org/jira/browse/RANGER-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15839074#comment-15839074 ]

Don Bosco Durai commented on RANGER-1310:
-----------------------------------------

[~rmani], sorry for the delay. I feel you are going in the right direction. A few points:

1) We will have one FileQueue which will store the audit in a file first using a FileSpooler.
This FileQueue will be synchronous and will replace the AsyncBatchQueue. Only one FileQueue
will be there for all the destinations.
Bosco: To get close to 100% reliability, you will need multiple FileQueues: one to replace the
AsyncQueue at ingress and one at each egress/destination. So all BatchQueues will need to be
replaced by FileQueues, and each destination will have its own file backing. This is required
because the end-to-end path is not transactional, so you have to ensure that at each point you
are effectively committing before going forward. These are typical two-phase-commit issues when
you have more than one system. (A rough sketch of this layout follows below.)
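
A minimal sketch of that layout, using hypothetical class names (not Ranger's actual audit classes),
where every hop writes to its own local spool file before handing records forward:

    // Hypothetical illustration only: one file-backed queue at ingress and one per
    // destination, so each hop "commits" to local disk before forwarding.
    import java.util.List;

    interface AuditHandler {                          // simplified consumer interface
        void log(List<String> events);                // events reduced to plain strings
    }

    class FileBackedQueue implements AuditHandler {
        private final AuditHandler consumer;          // next hop (destination or another queue)
        private final String spoolDir;                // local directory backing this queue

        FileBackedQueue(String spoolDir, AuditHandler consumer) {
            this.spoolDir = spoolDir;
            this.consumer = consumer;
        }

        @Override
        public void log(List<String> events) {
            appendToSpoolFile(events);                // 1) persist locally first
            consumer.log(events);                     // 2) then forward; on failure the spool
        }                                             //    file can be replayed later

        private void appendToSpoolFile(List<String> events) {
            // write events to a file under spoolDir; omitted for brevity
        }
    }

    class Pipeline {
        static AuditHandler build(AuditHandler solrDest, AuditHandler hdfsDest) {
            // one file-backed queue per destination (egress) ...
            AuditHandler solrQueue = new FileBackedQueue("/var/spool/audit/solr", solrDest);
            AuditHandler hdfsQueue = new FileBackedQueue("/var/spool/audit/hdfs", hdfsDest);
            // ... fanned out behind a single file-backed queue at ingress (replacing AsyncQueue)
            AuditHandler fanOut = events -> { solrQueue.log(events); hdfsQueue.log(events); };
            return new FileBackedQueue("/var/spool/audit/ingress", fanOut);
        }
    }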

4) The flow rate in this case would be the same across destinations (based on the time period in the
FileQueue to close and open an audit file). E.g. Solr will get data every 5 minutes if the file rollover
time is 5 minutes. HDFS will also get the data at the same rate, flushed to the HDFS cache.
Bosco: I guess you can't rely on an HDFS flush, because that is not guaranteed to persist the data to the
HDFS file in case of a NameNode restart. So I believe, to guarantee the write, you will have to explicitly
close the HDFS file and open a new one at each interval (see the sketch below).
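
A minimal sketch of that roll-over approach using the standard Hadoop FileSystem API (the path layout
and record format here are made up for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsRollover {
        // Write one interval's worth of audit lines into a fresh file and close it,
        // so the data is durable even if the writer or the NameNode restarts afterwards.
        static void writeInterval(FileSystem fs, Iterable<String> lines, long intervalId)
                throws java.io.IOException {
            Path file = new Path("/ranger/audit/spool-" + intervalId + ".log");   // made-up path
            try (FSDataOutputStream out = fs.create(file)) {
                for (String line : lines) {
                    out.writeBytes(line + "\n");
                }
                out.hflush();   // flushes to the write pipeline, but is not a durability guarantee
            }                   // close() finalizes the block and persists the file length
        }

        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            writeInterval(fs, java.util.List.of("audit-record-1", "audit-record-2"), 1L);
        }
    }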

Other than that, I think the FileQueue implementation is a good feature, and it will give users another
option to mix and match with the existing queues.

Thanks



> Ranger Audit framework enhancement to provide an option to allow audit records to be spooled to local disk first before sending it to destinations
> ---------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: RANGER-1310
>                 URL: https://issues.apache.org/jira/browse/RANGER-1310
>             Project: Ranger
>          Issue Type: Bug
>            Reporter: Ramesh Mani
>            Assignee: Ramesh Mani
>
> Ranger Audit framework enhancement to provide an option to allow audit records to be spooled to local disk first before sending them to destinations.
> xasecure.audit.provider.filecache.is.enabled = true ==> enables this functionality of the AuditFileCacheProvider to log the audits locally in a file first.
> xasecure.audit.provider.filecache.filespool.file.rollover.sec = {rollover time - default is 1 day} ==> controls how often the audit records are sent from the local file to the destinations and the pipe is flushed.
> xasecure.audit.provider.filecache.filespool.dir=/var/log/hadoop/hdfs/audit/spool ==> the directory where the audit FileSpool cache is kept.
> This helps in avoiding missing/partial audit records in the HDFS destination, which may happen randomly due to restarts of the respective plugin components.
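
For reference, the properties quoted above would sit in the plugin's audit configuration file; a sketch
with a placeholder rollover value (the 300-second value is only an example, the default discussed above is 1 day):

    # enable the AuditFileCacheProvider so audits are spooled to local disk first
    xasecure.audit.provider.filecache.is.enabled=true
    # how often the local spool file is rolled over and pushed to the destinations
    xasecure.audit.provider.filecache.filespool.file.rollover.sec=300
    # directory holding the local audit spool files
    xasecure.audit.provider.filecache.filespool.dir=/var/log/hadoop/hdfs/audit/spool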



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
