It seems that a common aim when first starting out in unit testing is to obtain 100% code coverage with our unit tests. This single metric becomes the defining goal, and once it is reached a new piece of functionality is targeted. After all, if you have 100% code coverage you can't get better than that, can you?

It's probably fair to say that it has taken me several years, and a few failed attempts at test driven development (TDD), to finally understand why production failures can still occur in code that is "100%" covered by tests! At its most fundamental level, this insight comes from realising that "100% code coverage" is not the aim of well tested code, but a by-product!

Consider a basic object, "ExamResult", that is constructed with a single percentage value. The object has a read-only property returning the percentage and a read-only bool value indicating pass/fail status. The code for this basic object is shown below:

namespace CodeCoverageExample
{
    using System;
    public class ExamResult
    {
        // Assumed pass mark for illustration.
        private const int PassMark = 50;

        private readonly int percentage;

        public ExamResult(int percentage)
        {
            this.percentage = percentage;
        }

        // Read-only property returning the percentage achieved.
        public int Percentage
        {
            get { return this.percentage; }
        }

        // Read-only flag indicating pass/fail status.
        public bool Passed
        {
            get { return this.percentage >= PassMark; }
        }
    }
}
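To make the coverage point concrete, consider a sketch of a test fixture for ExamResult (assuming NUnit; the source does not name a test framework). A single "happy path" test like this executes every line of the class, so a coverage tool reports 100%, yet the pass/fail boundary and out-of-range inputs such as 110 or -5 are never exercised:

namespace CodeCoverageExample.Tests
{
    using NUnit.Framework;

    [TestFixture]
    public class ExamResultTests
    {
        // Every line of ExamResult runs here: the constructor, the
        // Percentage getter, and the Passed getter. Coverage: 100%.
        // Untested: what Passed returns exactly at the pass mark, and
        // what the constructor does with an invalid percentage.
        [Test]
        public void Constructor_StoresPercentageAndReportsPass()
        {
            var result = new ExamResult(75);

            Assert.AreEqual(75, result.Percentage);
            Assert.IsTrue(result.Passed);
        }
    }
}

This is exactly how a defect can hide inside "100%" covered code: the metric counts lines executed, not behaviours verified.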