logging-log4j-user mailing list archives

From Anhad Singh Bhasin <anhadbha...@gmail.com>
Subject Best practice for logging in a highly available hadoop cluster
Date Wed, 27 Sep 2017 23:44:20 GMT
Hello,

We have a TCPSocketServer running on the edge node of a cluster, and all the
data nodes send their log events to it. We use the standard Routing appender
to redirect log events into individual log files.
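
For context, the per-host routing described above might look something like the sketch below. This is an assumption about the setup, not the actual configuration: the appender names and the `hostName` thread-context key are hypothetical placeholders.

```xml
<!-- Hypothetical log4j2.xml on the edge node: events received by the
     socket server are split into per-host files by the Routing appender.
     The "hostName" ThreadContext key is an assumed routing key. -->
<Configuration status="warn">
  <Appenders>
    <Routing name="RouteByHost">
      <Routes pattern="$${ctx:hostName}">
        <Route>
          <File name="File-${ctx:hostName}"
                fileName="logs/${ctx:hostName}.log">
            <PatternLayout pattern="%d %p %c - %m%n"/>
          </File>
        </Route>
      </Routes>
    </Routing>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="RouteByHost"/>
    </Root>
  </Loggers>
</Configuration>
```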

We are planning to make the system highly available by adding multiple edge
nodes. Each edge node would then have its own TCPSocketServer, and at any
particular time only one edge node would be active.

Since each edge node would have its own set of log files, is there a Log4j2
best practice for highly available systems to keep all the log files in one
place?

Can we push the log events into log files on HDFS through the Log4j2
Routing appender?
Or should we push all the log events into log files on a disk shared among
all the edge nodes?

Any suggestions or comments would be deeply appreciated.

Thanks
Anhad Singh Bhasin
