Covering Code Coverage

What happens when you run a code coverage tool against your test suite and you see some trivial methods, such as getters and setters in Java, are not being covered?  The novice developer will probably add a simple test to make sure the getters and setters do get and set the value they are, well, getting and setting.  The intermediate developer will claim those tests are pointless and just leave the gap, and maybe write a blog post about how stupid code coverage is.  The advanced developer will simply delete those methods; they were not needed anyway.
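To make the novice's move concrete, here is roughly what such a test looks like, a minimal sketch using a hypothetical `User` bean (the class and names are illustrative, not from any particular codebase):

```java
// Hypothetical User bean with trivial accessors.
class User {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class GetterSetterTest {
    public static void main(String[] args) {
        // The "novice" test: it raises the coverage number, but asserts
        // nothing the compiler doesn't already guarantee.
        User u = new User();
        u.setName("Ada");
        if (!"Ada".equals(u.getName())) {
            throw new AssertionError("getter/setter mismatch");
        }
        System.out.println("passed");
    }
}
```

The test passes trivially and always will, which is exactly why it adds coverage without adding confidence.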
Code coverage is extremely valuable, just not for the reasons people often give for it.  It should not be used to increase confidence in your testing, because it does nothing of the sort.  Just because something is being called by your tests does not mean it is being tested correctly, let alone tested thoroughly with respect to all the ways it can interact with the rest of the system.  As for testing your tests, I don’t know of a good way to accurately gauge the quality of tests other than how many defects they let through.  Unfortunately that measure is only available after the fact, as you have to have released the software in some form to determine it (hence why agilists so often push releasing incrementally).
Code coverage is also a poor tool for reporting to management how the team is doing in terms of code quality.  Any sort of quantification of quality is rife with potential errors, but especially one that is so easy to game.  Grading employees by how much code coverage they achieve only encourages writing dumb tests and verbose code.  It may be valuable to watch out for obvious gaps (if one component comes up with a code coverage of 10%, it may be appropriate to ask some questions about whether it is being tested at all), but it would be very dangerous to rely on the numbers for anything more than that.
So what is code coverage good for?  Assuming you have confidence in your tests (and confidence in the developers writing the code), code coverage becomes an effective way of finding dead code.  After all, if the tests are comprehensively covering all needed functions and scenarios, something that is never called by them is probably not needed and just adding bloat to your codebase.  If you do indeed need it for some obscure scenario, then your confidence in your tests is clearly misplaced, as obscure scenarios are frequently where bugs show up.  But even then, remember that it is not important that you have a test that calls those particular lines of code; it is important that you have a test which covers that scenario.  In other words, if those getters and setters are needed as part of dealing with bad data input into your system, don’t just write tests that get and set that value.  Write a test that inputs bad data to your system and makes sure it deals with it correctly.
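As a sketch of what that scenario-style test might look like, here is a hypothetical parser that validates raw input (all names here are made up for illustration).  Note that the accessors end up covered as a side effect of exercising the bad-data scenario, rather than being called directly by a test:

```java
// Hypothetical field object with the trivial accessors in question.
class AgeField {
    private int age;
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}

// Hypothetical entry point that must cope with bad data from outside.
class AgeParser {
    // Returns null for unparseable or out-of-range input.
    public static AgeField parse(String raw) {
        try {
            int value = Integer.parseInt(raw.trim());
            if (value < 0 || value > 150) return null;
            AgeField field = new AgeField();
            field.setAge(value); // accessor covered by the scenario, not a direct test
            return field;
        } catch (NumberFormatException e) {
            return null;
        }
    }
}

public class BadInputTest {
    public static void main(String[] args) {
        // Scenario test: bad data is rejected, good data round-trips.
        if (AgeParser.parse("not a number") != null) throw new AssertionError();
        if (AgeParser.parse("-5") != null) throw new AssertionError();
        AgeField ok = AgeParser.parse(" 42 ");
        if (ok == null || ok.getAge() != 42) throw new AssertionError();
        System.out.println("passed");
    }
}
```

The difference is what fails when the code breaks: if validation regresses, this test catches it, whereas a direct getter/setter test never would.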
And in the end, never forget what a wise man (Edsger Dijkstra) once said about tests:  “Testing shows the presence, not the absence of bugs.”  Code coverage is just another form of testing.  A gap can indicate a problem, but high coverage doesn’t tell you anything.

Posted in: Programming by Nicholas Watkins Brown