logging-log4j-dev mailing list archives

From: Remko Popma <remko.po...@gmail.com>
Subject: Re: Fast and Furious
Date: Wed, 02 Mar 2016 22:52:33 GMT
Robin, two questions:
First, what is the problem you're trying to solve?
Second, I'm having trouble understanding how your ideas fit in the log4j
design. Do you intend to feed info *into* log4j (something like custom
messages) or process info *coming out* of log4j (like a custom appender)?
Or both? (But if both, why use log4j?)

On Thursday, 3 March 2016, Robin Coe <rcoe.javadev@gmail.com> wrote:

> The idea is a lightweight service that starts TCP listeners that consume
> streams and parse them according to a layout, e.g., syslog, pipe, regex,
> etc.  The configuration is via YAML, whereby a listener is coupled to a
> codec.  The codec couples the input stream layout to the output log4j2
> route.
>
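> For illustration, a minimal YAML sketch of such a listener/codec coupling
> might look like this (all keys and names here are hypothetical placeholders,
> not an existing format):
>
>   listeners:
>     - name: syslog-in
>       port: 5140                 # TCP port the listener binds to
>       codec: syslog-to-upstream  # codec applied to this stream
>
>   codecs:
>     - name: syslog-to-upstream
>       input: syslog              # layout used to parse the inbound stream
>       route: upstream            # log4j2 route the parsed events feed into
>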
> The simplest use case is taking a stream, say stdout (12-factor
> architecture), and coupling it to a log4j2 route.  Another possibility is
> to consume JSON and create log event data structures from the document.  By
> extension, any UTF-8 stream could be parsed with regex, fields of interest
> extracted and injected into a log event, and passed to log4j2 to route.  The
> log4j2 routes I have set up use a failover strategy, whereby upstream sources
> that go offline cause log events to be written to a local file in JSON
> format.  On service recovery, I dispatch a worker that rematerializes those
> events and sends them upstream.  Recovery begins with a rollover to a file
> named by convention from the route; the worker then parses the file and sends
> the events up.  This gives me best-effort eventual consistency of log events
> in one or more upstream sinks.  Obviously, this implies a client agent.
>
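> As a rough sketch of the failover routing (the host, port, and file names
> below are placeholders; Failover, Socket, RollingFile, and JsonLayout are
> the standard log4j2 appenders/layouts):
>
>   <Configuration>
>     <Appenders>
>       <!-- Primary path: ship events to the upstream sink over TCP -->
>       <Socket name="Upstream" host="log-sink.example.com" port="4560">
>         <JsonLayout compact="true" eventEol="true"/>
>       </Socket>
>       <!-- Failover target: spool events locally as JSON until recovery -->
>       <RollingFile name="Spool" fileName="logs/spool.json"
>                    filePattern="logs/spool-%i.json">
>         <JsonLayout compact="true" eventEol="true"/>
>         <SizeBasedTriggeringPolicy size="10 MB"/>
>       </RollingFile>
>       <!-- Try the socket first; fall back to the local spool if it is down -->
>       <Failover name="Failover" primary="Upstream">
>         <Failovers>
>           <AppenderRef ref="Spool"/>
>         </Failovers>
>       </Failover>
>     </Appenders>
>     <Loggers>
>       <Root level="info">
>         <AppenderRef ref="Failover"/>
>       </Root>
>     </Loggers>
>   </Configuration>
>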
> My original architecture was based on Kafka for
> consistency/durability/performance, but I couldn't guarantee delivery of
> events from the emitter to the sink.  When I saw that log4j2 had failover,
> I came up with this solution; I just had to build the healthcheck service
> and the recovery worker.  My plan was to follow the log4j2 plugin
> architecture and use annotations to declare log event handlers, allowing
> extension of the log processor.  That part's not done.
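>
> When I get to it, a handler declaration could look something like the sketch
> below, using the standard log4j2 plugin annotations (@Plugin, @PluginFactory,
> @PluginElement, @PluginAttribute are real log4j2 Core APIs); the class itself
> and what it does with each event are hypothetical:
>
>   import java.io.Serializable;
>
>   import org.apache.logging.log4j.core.Filter;
>   import org.apache.logging.log4j.core.Layout;
>   import org.apache.logging.log4j.core.LogEvent;
>   import org.apache.logging.log4j.core.appender.AbstractAppender;
>   import org.apache.logging.log4j.core.config.plugins.Plugin;
>   import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
>   import org.apache.logging.log4j.core.config.plugins.PluginElement;
>   import org.apache.logging.log4j.core.config.plugins.PluginFactory;
>
>   // Hypothetical handler, declared through log4j2's plugin annotations so the
>   // log processor can discover it the same way log4j2 discovers appenders.
>   @Plugin(name = "StreamHandler", category = "Core", elementType = "appender",
>           printObject = true)
>   public final class StreamHandlerAppender extends AbstractAppender {
>
>       private StreamHandlerAppender(String name, Filter filter,
>                                     Layout<? extends Serializable> layout) {
>           super(name, filter, layout, true);
>       }
>
>       @Override
>       public void append(LogEvent event) {
>           // Placeholder: hand the event to the processor's routing/recovery logic.
>       }
>
>       @PluginFactory
>       public static StreamHandlerAppender createHandler(
>               @PluginAttribute("name") String name,
>               @PluginElement("Layout") Layout<? extends Serializable> layout,
>               @PluginElement("Filter") Filter filter) {
>           return new StreamHandlerAppender(name, filter, layout);
>       }
>   }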
>
>
>
>
