Quote:Original post by swiftcoder
It is possible to implement a logging server in a separate process, and have that server buffer the logs as necessary. Communicate via local IPC or sockets.
This approach should give one all the advantages of out-of-thread logging, plus logs survive program crashes, at the cost of some implementation complexity.
If you do not need a guarantee that every log entry is written, then there is no problem.
But consider a radiation therapy device that must be audited: not only must every log entry be written, it must also be replicated and stored in a way that cannot be tampered with, and each entry must be kept for a minimum of 10 years in a physically separate location with a paper trail, all while the real-time system may generate dozens of log entries per second.
In the more common case, however, two questions matter:
- How bad is it if random swaths of logs get lost or malformed?
- How bad is it if the last n logs are never written?
The second is usually much worse, since it makes diagnosing crashes impossible: the one time you really need every log entry you can get is in the last five function calls that led to the crash.
Sockets, out-of-thread writes and even async writes all lead to the second case. Even transactional file systems can be a problem, if the process fails in the middle of a transaction that is writing a considerable chunk of data.
Simply put, for reliable logging, flush each entry to disk after each write, or use a database that does this for you. Locking is highly problematic here, since an important log entry may sit blocked behind a lock while a mere INFO message is being written.
Ugh... performance is the very last issue one needs to deal with. Just keep one log file per thread, and merge them externally. Very few systems actually put any effort into logging; they do well enough with plain fwrite(). Performance these days is usually solved by adding more cheap machines.