There are many different kinds of bugs that we encounter in software development:
- missing functionality (doesn't do what's required)
- unanticipated/undesired behavior (it did what??)
- failure to check boundaries or boundary conditions (e.g., null)
- user-interface issues (usability)
- miscalculations (and logic errors)
- control flow errors (failing to break out of a loop, etc.)
- failure to handle errors (unhandled exceptions)
- security issues (the STRIDE categories: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege)
- data interpretation errors (what date is 11/12/13 anyway?)
- interoperability errors (sending/receiving incorrect messages or data)
and most likely lots of others. Every bug in each of these areas represents an opportunity to write a test that was missed somewhere along the development path.
Here is a simple real-world example. Today my log-file-writing routine threw an unhandled exception when the file it was trying to write to was in use by another application (failure to handle errors, above), and the app crashed. In truth, I had written the code TDD, so it had tests, but none of them held the file open while the code attempted to log data.
This was an unanticipated condition, but probably one that should have been pretty obvious too. However, I was trying to keep the code as simple as possible, and it's not a multi-threaded app, so there wouldn't have been *internal* collisions anyway. Regardless, the app crashing because its log file is being edited is probably not the desired behavior. So (being the test-driven developer) I wrote this test first, to illustrate the bug:

```csharp
[Test]
public void TestLogFileInUse()
{
    // Hold the log file open so the Log() call collides with it.
    using (FileStream stream = File.OpenWrite(Utilities.LogFileName))
    {
        Utilities.Log("this write should not crash the app");
    }
}
```
It's really simple, but it definitely did fail when the Log() method threw an exception because its file was being held open by the test. Only after I had a failing test did I modify the code to catch the IOException and handle it without crashing the application. I could also have written an event log entry when the file write fails, if that were the desired behavior.
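A minimal sketch of that fix, assuming the logger is a static Utilities.Log helper that appends to Utilities.LogFileName (both names taken from the test; the body of the real routine is an assumption):

```csharp
using System;
using System.IO;

public static class Utilities
{
    // Hypothetical log-file path; the real value would come from configuration.
    public static readonly string LogFileName = "app.log";

    public static void Log(string message)
    {
        try
        {
            // Throws IOException when another process (or a test) holds
            // the file open with an exclusive lock.
            File.AppendAllText(LogFileName, message + Environment.NewLine);
        }
        catch (IOException)
        {
            // The log file is in use: losing one log entry is preferable
            // to crashing the application. An event log entry could be
            // written here instead, if that were the desired behavior.
        }
    }
}
```

With the IOException caught, holding the file open no longer brings the application down, and the failing test passes.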
The test is fast and simple, and it now guarantees not only that I fixed the bug by making the test pass (not crashing is the only requirement here) but also that the issue will never come back.
This may be an almost trivial case, but it does illustrate how to use TDD when fixing bugs. I shouldn't really ever be adding new functionality to the mainline code without a failing test of some kind. Refactoring doesn't count, because we only refactor when all the tests are passing; refactoring doesn't add any new functionality, it just reorganizes the existing code to make it better.
The worst thing about fixing bugs is when they pop back up later... We hate regressions, because then we have to re-do re-done work YET AGAIN... Lean says that defects are waste, and if they come back after being fixed, then that makes the time spent on fixing them the second time even more wasteful.
This method of "find a bug >> write a test" is a Test-Driven approach to solving issues and making sure they never come back. Let's make regressions an artifact of history, and never fix a bug without having a failing test first!