Unit Test Code Coverage

It seems that a common aim when first starting out in unit testing is to obtain 100% code coverage with our unit tests.  This single metric becomes the defining goal, and once it is obtained a new piece of functionality is targeted.  After all, if you have 100% code coverage you can’t get better than that, can you?

It’s probably fair to say that it’s taken me several years and a few failed attempts at test driven development (TDD) to finally understand why production code can still fail even when it is “100%” covered by tests!  At its most fundamental level this insight comes from realising that “100% code coverage” is not the aim of well tested code, but a by-product!

Consider a basic object, “ExamResult”, that is constructed with a single percentage value.  The object has a read-only property returning the percentage and a read-only bool property indicating a pass/fail status.  The code for this basic object is shown below:

namespace CodeCoverageExample
{
    using System;

    public class ExamResult
    {
        private const decimal PASSMARK = 75;

        public ExamResult(decimal score)
        {
            if (score < 0 || score > 100)
            {
                throw new ArgumentOutOfRangeException("score", score, "'Score' must be between 0 and 100 (inclusive)");
            }

            this.Score = score;
            this.Passed = DetermineIfPassed(score);
        }

        public decimal Score { get; private set; }

        public bool Passed { get; private set; }

        private static bool DetermineIfPassed(decimal score)
        {
            return (score >= PASSMARK);
        }
    }
}
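
As a quick illustration of how the object behaves (this snippet is mine, not part of the tests that follow):

// Illustrative usage of ExamResult.
var result = new ExamResult(82);
Console.WriteLine(result.Score);   // 82
Console.WriteLine(result.Passed);  // True (82 >= 75)

Console.WriteLine(new ExamResult(60).Passed);  // False (60 < 75)

// Scores outside 0-100 throw:
// new ExamResult(101);  // ArgumentOutOfRangeException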

For the code above, the following tests would obtain the magic “100% code coverage” figure:

namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class PoorCodeCoverage
    {
        [TestMethod]
        public void ExamResult_BadArgumentException()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(-1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed()
        {
            Assert.IsTrue(new ExamResult(79).Passed);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed()
        {
            Assert.IsFalse(new ExamResult(0).Passed);
        }
    }
}

Note: The testing examples in these blog posts use both MSTest and NUnit.  Decorating the class with MSTest attributes gives you automated test running in TFS continuous integration “out of the box”, while aliasing “Assert” to the NUnit version gives access to NUnit’s assertion syntax (which I was originally more familiar with and still prefer).

Running any code coverage tool will clearly show that all the paths are being tested, but are you really protected against modifications introducing unintended logic changes?  This can be checked by running through a few potential situations: changing the pass mark to 80% would cause the above unit tests to fail, but reducing it to 1% wouldn’t.  If you consider that the main purpose of the unit tests is to verify that the exam result is correctly determined (and the potential consequences in the “real world” if it is not), then this sort of check is clearly not fit for purpose.

In this scenario it is critical that edge cases are tested – the points at which a result crosses from being a failure to a pass, and similarly from being a valid result to an invalid one (you can’t score less than 0% or more than 100%).  Similarly, short cuts should not be taken in asserting the state of the object under test in each individual test – don’t assume that because the “Score” property was correctly set in one test it will be correct in another (and therefore leave it untested).

The following improved unit tests verify the desired behaviour of the object in full and, in the process of this verification, cover 100% of the code.  It is this change in priority that is critical when designing and developing your unit tests.  Only when all the logic paths through your code are tested are your unit tests complete, and at that point you should have 100% code coverage by default.

namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class GoodCodeCoverage
    {
        private const decimal LOWEST_VALID_SCORE = 0;
        private const decimal HIGHEST_VALID_SCORE = 100;
        private const decimal PASSMARK = 75; // deliberately duplicated from ExamResult (see the additional comment below)

        [TestMethod]
        public void ExamResult_BadArgumentException_UpperLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(HIGHEST_VALID_SCORE + 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_BadArgumentException_LowerLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(LOWEST_VALID_SCORE - 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_HigherLimit()
        {
            AssertCall(HIGHEST_VALID_SCORE, true);
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_LowerLimit()
        {
            AssertCall(PASSMARK, true);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_HigherLimit()
        {
            AssertCall(PASSMARK - 1, false);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_LowerLimit()
        {
            AssertCall(LOWEST_VALID_SCORE, false);
        }

        private void AssertCall(decimal score, bool result)
        {
            var examResult = new ExamResult(score);
            Assert.That(examResult.Score, Is.EqualTo(score));
            Assert.That(examResult.Passed, Is.EqualTo(result));
        }
    }
}
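
As an aside, if these tests were run purely under NUnit (rather than via the MSTest attributes for TFS integration), the four boundary checks could be collapsed into a single parameterised test using NUnit’s “TestCase” attribute.  A minimal sketch, assuming an NUnit-only test project:

namespace CodeCoverageExample
{
    using NUnit.Framework;

    [TestFixture]
    public class ExamResultBoundaryTests
    {
        // One parameterised test covers all four pass/fail boundaries.
        [TestCase(100, true)]  // highest valid score passes
        [TestCase(75, true)]   // exactly on the pass mark passes
        [TestCase(74, false)]  // one below the pass mark fails
        [TestCase(0, false)]   // lowest valid score fails
        public void Passed_IsDeterminedByPassMark(int score, bool expected)
        {
            var examResult = new ExamResult(score);
            Assert.That(examResult.Score, Is.EqualTo((decimal)score));
            Assert.That(examResult.Passed, Is.EqualTo(expected));
        }
    }
}

This keeps the edge cases just as explicit while removing the duplicated test bodies, at the cost of not running under the MSTest/TFS setup used above.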

Additional Comment: Whilst working through these examples I considered exposing the “pass-mark” constant held in the “ExamResult” object so that it could be used within our unit tests.  In certain situations that could be acceptable (or even desirable).  However, unless there is a requirement to share it, it is probably better to keep the two separate, as this forces the unit test to explicitly define the pass/fail point that it is testing.
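
For illustration, a minimal sketch of what that alternative might look like (this is hypothetical – the code above deliberately does not do this):

// Hypothetical alternative: expose the pass mark from ExamResult
// so the test class can reference it directly.
public class ExamResult
{
    // Made public purely so that tests can reference it.
    public const decimal PASSMARK = 75;

    // ... rest of the class unchanged ...
}

// The test class then drops its local PASSMARK constant and the
// boundary tests reference the production value:
//
//     AssertCall(ExamResult.PASSMARK, true);
//     AssertCall(ExamResult.PASSMARK - 1, false);

The trade-off is that the tests would now pass for any pass mark, so an accidental change of PASSMARK to, say, 8 would no longer be caught – which is exactly why the version above duplicates the value instead.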
