[Bundy-hackers] Centralized logging process (Fw: [bundy] lettuce test failures due to corrupted log output (#4))
神明達哉 jinmei at wide.ad.jp
Mon May 5 19:46:38 CEST 2014
At Sun, 4 May 2014 22:52:56 +0200,
Shane Kerr <shane at time-travellers.org> wrote:

> In retrospect I realize that I failed to do something which was
> important from the very beginning of the project... TEST ASSUMPTIONS.
>
> It should be possible to benchmark logging directly using log4cplus and
> compare to a centralized logging server. This could reveal the actual
> performance costs (if any) to such a model.
>
> My guess is that logging through a centralized logging server is not
> much slower than logging directly from each process. Given that we
> could remove file locking primitives, perhaps a centralized logger
> could even be faster. Unless it is a LOT faster, we'd probably want to
> still avoid centralized logging when possible because of the single
> point of failure.
>
> However, since we seem to constantly have problems with STDIO/STDERR,
> perhaps it would make sense to centralize logging only for those
> streams.

If it's only for stdout and stderr, I'm not sure it's worth introducing
an additional component; those are generally exceptional cases, used for
things like debugging, right?  And, as I proposed for issue #4, we could
use a workaround (logging to a file and then running 'tail -f' or
something similar) for such exceptional cases.

On the other hand, if we are willing to accept the communication
overhead of a centralized logging server for general operation, we can
simply use syslog.

--
jinmei
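
[For reference, a minimal, hypothetical sketch of the kind of measurement
Shane suggests.  It deliberately bypasses log4cplus and only times the two
underlying I/O paths: appending to a per-process log file (the "each
process logs directly" model) versus handing each message to the local
syslog daemon (one possible stand-in for a centralized logging server).
The program name "bundy-logbench", the file name "logbench.out", and the
message count are made-up placeholders, and absolute numbers will depend
heavily on how the syslog daemon is configured, e.g. whether it syncs
each entry; a real comparison would also go through the log4cplus
appenders the processes actually use.]

    // bench_logging.cc -- rough comparison of per-process file logging
    // versus sending every message to the local syslog daemon.
    #include <fcntl.h>
    #include <syslog.h>
    #include <time.h>
    #include <unistd.h>

    #include <cstdio>

    namespace {

    const int kMessages = 100000;
    const char kFormat[] = "benchmark log entry of a plausible length, seq=%d";

    // Monotonic timestamp in seconds.
    double now() {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    // Model 1: the process appends every message to its own log file.
    double benchFile(const char* path) {
        const int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) {
            std::perror("open");
            return -1;
        }
        const double start = now();
        char buf[256];
        for (int i = 0; i < kMessages; ++i) {
            const int len = std::snprintf(buf, sizeof(buf), kFormat, i);
            if (write(fd, buf, len) < 0 || write(fd, "\n", 1) < 0) {
                std::perror("write");
                break;
            }
        }
        const double elapsed = now() - start;
        close(fd);
        return elapsed;
    }

    // Model 2: every message is handed to the central syslog daemon.
    double benchSyslog() {
        openlog("bundy-logbench", LOG_PID, LOG_DAEMON);
        const double start = now();
        for (int i = 0; i < kMessages; ++i) {
            syslog(LOG_INFO, kFormat, i);
        }
        const double elapsed = now() - start;
        closelog();
        return elapsed;
    }

    }  // unnamed namespace

    int main() {
        const double file_time = benchFile("logbench.out");
        const double syslog_time = benchSyslog();
        std::printf("%d messages: file=%.3fs (%.1f us/msg), "
                    "syslog=%.3fs (%.1f us/msg)\n",
                    kMessages, file_time, file_time * 1e6 / kMessages,
                    syslog_time, syslog_time * 1e6 / kMessages);
        return 0;
    }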