Unit Test Code Coverage

It seems that a common aim when first starting out in unit testing is to obtain 100% code coverage with our unit tests.  This single metric becomes the defining goal and, once obtained, a new piece of functionality is targeted.  After all, if you have 100% code coverage you can’t get better than that, can you?

It’s probably fair to say that it’s taken me several years and a few failed attempts at test driven development (TDD) to finally understand why, when production code fails, it can still occur in code that is “100%” covered by tests!  At its most fundamental level this insight comes from realising that “100% code coverage” is not the aim of well tested code, but a by-product!

Consider a basic object, “ExamResult”, that is constructed with a single percentage value.  The object has a read-only property returning the percentage and a read-only bool property indicating pass/fail status.  The code for this basic object is shown below:

namespace CodeCoverageExample
{
    using System;

    public class ExamResult
    {
        private const decimal PASSMARK = 75;

        public ExamResult(decimal score)
        {
            if (score < 0 || score > 100)
            {
                throw new ArgumentOutOfRangeException("score", score, "'Score' must be between 0 and 100 (inclusive)");
            }

            this.Score = score;
            this.Passed = DetermineIfPassed(score);
        }

        public decimal Score { get; private set; }

        public bool Passed { get; private set; }

        private static bool DetermineIfPassed(decimal score)
        {
            return (score >= PASSMARK);
        }
    }
}

For the code above, the following tests would obtain the magic “100% code coverage” figure:

namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class PoorCodeCoverage
    {
        [TestMethod]
        public void ExamResult_BadArgumentException()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(-1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed()
        {
            Assert.IsTrue(new ExamResult(79).Passed);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed()
        {
            Assert.IsFalse(new ExamResult(0).Passed);
        }
    }
}

Note: The testing examples in these blog posts use both MSTest and NUnit.  By decorating the class with MSTest attributes you get automated test running in TFS continuous integration “out of the box”.  Aliasing “Assert” to the NUnit version allows access to NUnit’s assertion syntax (which I was originally more familiar with and still prefer).

Running any code coverage tool will clearly show all the paths are being tested, but are you really protected against modifications introducing unintended logic changes?  This can be checked by running through a few potential situations: changing the pass mark to 80% would cause the above unit tests to fail, but reducing it to 1% wouldn’t.  If you consider that the main purpose of the unit tests is to verify that the exam result is correctly determined (and the potential consequences in the “real world” if it is not), then this sort of check is clearly not fit for purpose.

In this scenario it is critical that the edge cases are tested - the points at which a result moves from being a failure to a pass, and similarly from being a valid result to an invalid one (you can’t score less than 0% or more than 100%).  Similarly, short cuts should not be taken when asserting the state of the object under test in each individual test - don’t assume that because the “Score” property was correctly set in one test it will be correct in another (and can therefore go untested).

The following improved unit tests verify the desired behaviour of the object in full and, in the process of this verification, cover 100% of the code.  It is this change in priority that is critical when designing and developing your unit tests.  Only when all the logic paths through your code are tested are your unit tests complete - and at that point you should, by default, have 100% code coverage.

namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class GoodCodeCoverage
    {
        private const decimal LOWEST_VALID_SCORE = 0;
        private const decimal HIGHEST_VALID_SCORE = 100;
        private const decimal PASSMARK = 75;

        [TestMethod]
        public void ExamResult_BadArgumentException_UpperLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(HIGHEST_VALID_SCORE + 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_BadArgumentException_LowerLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(LOWEST_VALID_SCORE - 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_HigherLimit()
        {
            AssertCall(HIGHEST_VALID_SCORE, true);
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_LowerLimit()
        {
            AssertCall(PASSMARK, true);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_HigherLimit()
        {
            AssertCall(PASSMARK - 1, false);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_LowerLimit()
        {
            AssertCall(LOWEST_VALID_SCORE, false);
        }

        private void AssertCall(decimal score, bool result)
        {
            var examResult = new ExamResult(score);
            Assert.That(examResult.Score, Is.EqualTo(score));
            Assert.That(examResult.Passed, Is.EqualTo(result));
        }
    }
}

Additional Comment: Whilst working through these examples I considered exposing the “pass-mark” constant held in the “ExamResult” object so that it could be used within the unit tests.  In certain situations that could be acceptable (or even desirable).  However, unless there is a specific requirement, it is probably better to keep the two separate, as this forces the unit test to explicitly define the pass/fail point that it is testing.
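To illustrate the trade-off, here is a minimal sketch of the rejected alternative (the `PassMark` member name and its visibility are hypothetical, not part of the original class):

```csharp
namespace CodeCoverageExample
{
    public class ExamResult
    {
        // Hypothetical change: the pass mark is made public so that
        // tests can reference it instead of redefining their own copy.
        public const decimal PassMark = 75;

        public ExamResult(decimal score)
        {
            if (score < 0 || score > 100)
            {
                throw new System.ArgumentOutOfRangeException("score", score, "'Score' must be between 0 and 100 (inclusive)");
            }

            this.Score = score;
            this.Passed = (score >= PassMark);
        }

        public decimal Score { get; private set; }

        public bool Passed { get; private set; }
    }
}
```

A test could then assert the boundary with `new ExamResult(ExamResult.PassMark - 1).Passed`.  The drawback is exactly the one noted above: if the production constant silently changes, the tests adapt to the new value and can no longer catch the unintended logic change.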
