Software Logging Schemes?
MySkippy writes "I've been a software engineer for just over 10 years, and I've seen a lot of different styles of logging in the applications I've worked on. Some were extremely verbose — about 1 logging line for every 2 lines of code. Others were very lacking, with maybe 1 line in 200 devoted to logging. I personally find that writing debug and informational messages about every 2 to 5 lines works well for debugging an issue, but can become cumbersome when reading through a log for analysis. I like to write warning messages when thresholds or limits are being approached — these tend to be infrequent. I log errors whenever I catch one (but I've never put a 'fatal' message in my code, because if it's truly a fatal error I probably didn't catch it). Recently I came across log4j and log4net and have begun using them both. That brings me to my question: how do the coders on Slashdot handle logging in their code?"
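For concreteness, here is roughly what the level scheme described in the question might look like, sketched with Python's stdlib logging module (log4j and log4net expose the same levels); the function, values, and messages are made up for the example:

```python
import io
import logging

# Capture output in a buffer so the example is self-contained.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log = logging.getLogger("scheme-demo")
log.setLevel(logging.DEBUG)
log.addHandler(handler)

def process(value, limit=100):
    log.debug("processing value=%d", value)                     # routine trace
    if value > 0.9 * limit:
        log.warning("value %d is near limit %d", value, limit)  # threshold approached
    try:
        return limit // value
    except ZeroDivisionError:
        log.error("division by zero for value=%d", value)       # caught error
        return None

process(95)
process(0)
```

Debug lines trace normal flow, warnings fire as a limit is approached, and errors are logged where the exception is caught, matching the scheme in the question.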
Jakarta Commons Logging (Score:1, Informative)
I use Jakarta Commons Logging...
What do you want to achieve... (Score:5, Informative)
As usual, "it depends" on what you are trying to achieve. Nobody can give you a blanket recommendation. But as a general rule: the log files need to give you enough information, which you can't get any other way, to solve any problem that comes up.
We have a realtime product that goes all over the world and talks to hardware that we can't always get access to ourselves. Therefore, we sometimes must debug our code remotely. Obviously, logging is critical to this. We sometimes keep hundreds of MB of logs, and we have archiving rules and a tool for users to collect them. Every layer of the system keeps its own logs, and all logs have timestamps to the millisecond.
In our case we log all the data back and forth, and then every important decision the code makes. For example, if it decides there is something wrong with incoming data, it must log that. Any action it decides to take must be logged. Any data that will be passed on to other layers or the outside world must be shown. Generally, whenever we forget to log some of this data we later regret it ("why the hell is it ignoring that device state..."). We also log, at startup, basically the whole system configuration so that we can reproduce it.
Callstacks when there is an exception can be very useful. However, a lot of "errors" (at least in our case) are not exceptions but rather unexpected data or behaviour. We rarely have a crash and in state-based systems a callstack doesn't tell you much about what's going on. So a callstack is not useful for all situations.
Other times, you just want logging to give you a clue where in the code it was so you can run up the debugger and step through it (you do know how to step through code in the debugger, right?). In that case, too much logging can just get in the way. It might be sufficient in a GUI or web app to say which screen/page and which button was clicked.
You'd hope users could report this kind of detail, but not always: if the user is working in another language, in another country, with two layers of helpdesk between you and them; if they are busy doing other things when the problem occurs and only call in the issue an hour later; and if the helpdesk takes a week to report it to you, then you may just get a vague or even misleading report that no one can remember the details of when you ask questions. In those cases log files may be all you have to go on.
There is also a tradeoff between log detail and manageability. Besides the difficulty of reading long log files, having a lot of detail means maintaining a lot of extra code. It also means that log files can become unmanageably large. In our case those hundreds of MB of logs can be a huge problem for customers to send to us, because they have low-quality internet connections (small companies in Mexico, for example).
Standard format for domain information (Score:5, Informative)
If your system has objects of type A, B, and C which can be handled by different components of your system, then you should make the logging code in those components print information about those objects in exactly the same way.
While you are at it, make the log format easily parsable by software. You don't want to be looking for a needle in a gigabyte-sized haystack of trace information without help from a tool that understands what it is looking for.
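A minimal sketch of a machine-parsable format, assuming one JSON object per line (field names are invented; Python's stdlib logging and json modules stand in for whatever your system uses):

```python
import io
import json
import logging

# Emit each record as one JSON object per line so an analysis tool can
# parse the log instead of grepping free text.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "component": record.name,
            "msg": record.getMessage(),
        })

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("format-demo")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("object A handled")
log.info("object B handled")

# Any tool can now load the lines back as structured records:
records = [json.loads(line) for line in buf.getvalue().splitlines()]
```

The key point is that every component serializes the same object the same way, so one parser works across the whole system's logs.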
Logging to a database (Score:4, Informative)
My online applications log to a database, not a text file. Multiple applications on different machines can log to the same database table. There's no need for "log rotation"; old entries can be summarized and purged by date on the live database. With appropriate indexed fields, you can find key log entries in huge log files very rapidly.
Even program faults are logged to the database. If the program crashes, the top-level exception handler catches the event, does a traceback, opens a fresh database connection, and logs the traceback.
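The crash-logging pattern might be sketched like this, with an in-memory sqlite3 database standing in for the real log database (table and column names are illustrative; a production handler would open a fresh connection to the shared server, as described above):

```python
import sqlite3
import traceback

# In-memory database for the sketch; production code would connect to the
# shared log database instead.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE crash_log (ts TEXT DEFAULT CURRENT_TIMESTAMP, traceback TEXT)")

def main():
    raise RuntimeError("simulated program fault")

try:
    main()
except Exception:
    # Top-level handler: capture the traceback and log it like any other event.
    db.execute("INSERT INTO crash_log (traceback) VALUES (?)",
               (traceback.format_exc(),))
    db.commit()

rows = db.execute("SELECT traceback FROM crash_log").fetchall()
```

Because the log lives in an indexed table, the crash record can be found by date alongside the ordinary entries that led up to it.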
Logging, parts I and II (Score:2, Informative)
I wrote up a two-part piece on logging best and worst practices a while back. See Part I [distancesoftware.com] and Part II [distancesoftware.com] if you are interested.
Re:What do you want to achieve... (Score:5, Informative)
+1 for parent.
If you want good logging, then define requirements for it, just as you would for any other feature of the program. You also need to define the audience for the log. The comments thread has focused on debug logging for developers (Linus "no debuggers" Torvalds would be proud) but there are a number of reasons why the users who are stuck^h^h^h blessed with your software might want logging. For instance:
- audit trails (often required by organizational security requirements or regulatory requirements)
- accounting/billing (you'd be amazed at the odd ways people come up with to bill for things)
- health monitoring (the admin might not want to watch your program 24x7 to see if it is running; they might want to program automation to be alerted when it is not working properly)
- troubleshooting (believe it or not, your software might actually break when running in the wild)
Anyway, think about your use cases, and then think about what to instrument for each use case, and what to put in the events.
For instance, if you want to make your daemon monitorable for health, then think about all its dependencies. Does it read config from a file? The file is a dependency. What happens if a value is invalid? Does it fail or use a default? If it fails, reading the value is a dependency. Need a network socket? Dependency. Connection to a remote machine? Dependency (actually several: name resolution, network connectivity, authentication, app-level connectivity, etc.). After you've enumerated all your dependencies, add instrumentation in your code to log events when a dependency is unsatisfied (unhealthy/broken) and when it is satisfied (healthy). Make sure to log BOTH states, so that the monitoring app can decide which state you are in. Make sure to log only once per state transition. In each event, put as much information about the situation as you can: why you are in the state ("the value foo from daemon-config was invalid"), status codes, and so on. Give your user a fighting chance of being able to use your log to diagnose and resolve the issue.
If you want to instrument for audit, then I suggest reading the Orange Book or the Common Criteria documents for suggestions on what needs to be audited and what information to put in the events.
For accounting, examine the RADIUS RFCs.
Hope this helps.
Re:not cpu bound... disk bound (Score:5, Informative)
For high availability clustered web applications, it's not disc IO that is the problem, but network overhead.
I tend to use log4j with asynchronous logging that passes log messages to a syslog server which handles the file I/O, and it ends up being the network overhead that is the killer.
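That kind of asynchronous logging can be sketched with Python's stdlib QueueHandler/QueueListener, with a StringIO standing in for the slow syslog/network handler (names are illustrative):

```python
import io
import logging
import logging.handlers
import queue

buf = io.StringIO()
q = queue.Queue()

# The slow handler (in reality a SysLogHandler or socket) runs on the
# listener's background thread, not in the application thread.
slow_handler = logging.StreamHandler(buf)
slow_handler.setFormatter(logging.Formatter("%(message)s"))
listener = logging.handlers.QueueListener(q, slow_handler)
listener.start()

log = logging.getLogger("async-demo")
log.setLevel(logging.INFO)
log.addHandler(logging.handlers.QueueHandler(q))  # cheap, non-blocking enqueue

log.info("request handled")  # returns immediately; I/O happens elsewhere
listener.stop()              # drains the queue before returning
```

The application thread only pays the cost of an enqueue; the network or disk latency is absorbed by the listener thread.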
Whatever is useful while programming. (Score:3, Informative)
I tend to do my debugging by inserting a lot of printf statements to indicate where in the program I currently am and the value of any critical variables at that time. Once the output is no longer needed (i.e., I fixed the bug it was attached to), I cull out whatever isn't useful anymore. However, I tend to keep the entry messages in function calls related to a routine I'm working on or making more than a trivial change to, since chances are, knowing me, I'm going to end up putting them back in there anyway once I create a new bug... and let's face it, it WILL happen.
Once I'm done, I go back, remove or comment out (usually just comment out) all the messages that have no redeeming value for a properly functioning program, and turn the rest into debug statements which print based on the debug level provided at execution time... or sometimes I use a mask to select which types of messages to display.
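The mask idea might look something like this (category names, bit values, and messages are made up for the sketch):

```python
# Each message category gets one bit; a runtime mask selects which
# categories are emitted.
DBG_IO, DBG_STATE, DBG_NET = 1, 2, 4

messages = []
debug_mask = DBG_IO | DBG_NET  # e.g. parsed from a command-line flag

def debug(category, msg):
    if debug_mask & category:
        messages.append(msg)

debug(DBG_IO, "read 512 bytes")
debug(DBG_STATE, "state -> CONNECTED")  # filtered out by the mask
debug(DBG_NET, "sent packet 7")
```

Unlike a single numeric debug level, a mask lets you turn on exactly the message types you care about without drowning in everything below some threshold.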
-Restil
Lotsa logging (Score:4, Informative)
I write code for Telecom test systems that need to run 24x7 processing highly varying requests from dozens of different client systems. Our system consists of dozens of different processes/components per host, with multiple hosts all invoking components on each other as needed (all via CORBA). There are very many paths that any request can take through our system.
In this environment we log VERY heavily, since each request is close to unique and, in the event of any bug report, we need to be able to determine the path it took through the system, why it took that path, and what happened. Some of the most heavily used modules can produce close to 1GB of log per day per host: up to a couple hundred lines of logging information per request per process that it passes through. We use a custom printf-like log library written in C++ (which auto-rotates the log files based on various criteria), a custom tail utility for dealing with the large log files (tail a log file from a given timestamp, done instantly via binary search on the timestamps), and a daily cron job to compress the older log files and move any older than 5 days off the production servers to someplace with more storage.
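The binary-search tail trick can be sketched like this; the real tool seeks byte offsets within a file, but an in-memory list of lines keeps the idea short (timestamps and messages are invented):

```python
import bisect

# Log lines are written in timestamp order, so a binary search finds the
# starting point without scanning the whole file.
log_lines = [
    "2004-06-01 10:00:00.001 request 1 received",
    "2004-06-01 10:00:00.250 request 1 routed",
    "2004-06-01 10:05:13.042 request 2 received",
    "2004-06-01 11:30:00.500 request 3 received",
]

def tail_from(lines, timestamp):
    # In this zero-padded format, timestamps sort lexically, so the line
    # prefixes compare correctly against the bare timestamp.
    i = bisect.bisect_left(lines, timestamp)
    return lines[i:]

recent = tail_from(log_lines, "2004-06-01 10:05:00.000")
```

On a 1GB file this is the difference between a handful of seeks and reading the whole file just to find yesterday afternoon.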
Re:Embedded debugging (Score:2, Informative)
Engineers not taking the time to write logs that are useful. "Got to here", "Value=1", etc. A few of us write enum-to-string functions and pass them to the logging system for cleaner output.
An Engineer does something that stupid?!!!
Who told them they were engineers? The HR Dork?
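Snark aside, the enum-to-string trick might look like this (Python's Enum gives the name for free; in C it would be a hand-written switch, and the state names here are invented):

```python
from enum import Enum

class DeviceState(Enum):
    IDLE = 0
    CONNECTING = 1
    FAULT = 2

def describe(state):
    # Log the symbolic name instead of a bare integer.
    return f"device state = {state.name} ({state.value})"

line = describe(DeviceState.FAULT)
```

"device state = FAULT (2)" in a log is self-explanatory; "Value=2" forces whoever reads it to go dig through the headers.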
operational errors (Score:3, Informative)
1. Distinguish between serious -operational- issues and other issues. The sysadmin doesn't need to know that you had a pointer problem; there's nothing he can do about it. He does, however, want to know that a message was received and the appropriate action taken. Or that a connection was attempted but failed.
2. Be grep-friendly. The first log entry related to a particular activity should have an ID of some sort in the log message, which is then included in every additional log entry associated with that activity.
Re:As little as practically possible (Score:3, Informative)
And let's hope it never does disappear or we can kiss new software goodbye.
Some of that behavior I've seen, we'd be better off if it disappeared. Sometimes, simply expanding till limits are met is like the old saying "some day, computers will be the size of buildings, and have millions of vacuum tubes!"
Perhaps I'm just pessimistic because where I work they're in the process of replacing the huge, unwieldy FileMaker monstrosity that handles payroll.... with an even bigger, more unwieldy FileMaker monstrosity that does the same thing, but requires us to feed it more information (more detailed job codes, time tracked in 6 minute increments, etc). All this extra data will be used to generate reports explaining why we're always releasing late. Never mind that we've been badgering them to hire more people for months. No, it's clearly a time management issue! As the PHB in Dilbert once said, "I want hourly reports on why we're behind schedule!"
Re:not cpu bound... disk bound (Score:4, Informative)
Ehhh. An HP 373i quad-port gigabit NIC (PCI Express) kicks ass and only runs about $350.