Tuesday, 11 October 2011

nLog - Updating ExceptionLayoutRenderer to capture “Exception.Data”

Something that appears to be missing from many logging frameworks is the ability to log/report the “Data” collection of any thrown exceptions. I originally encountered this issue with log4net, and have recently found that nLog lacks the same functionality. As our current project looks like it will be switching to nLog, I’ve had a dig around in the source code to see how easily this functionality can be added.

Basically, as a quick and dirty solution, the desired functionality can be added with the following method in the “ExceptionLayoutRenderer” class:

private static void AppendData(StringBuilder sb, Exception ex)
{
    var separator = string.Empty;
    foreach (var key in ex.Data.Keys)
    {
        sb.Append(separator);
        sb.AppendFormat("{0}: {1}", key, ex.Data[key]);
        separator = " ";
    }
}

And then referencing this new function inside the “switch” statement of “CompileFormat”:

case "DATA":
dataTargets.Add(AppendData);
break;

Finally, to use this new functionality, add “data” to the ${exception:format=} comma-delimited list in the configuration section anywhere you want to report exception data. Right now the formatting is pretty basic; I want to play some more to see what options are available to improve the rendering whilst adhering to the existing nLog configuration model. It may be easier to add a custom LayoutRenderer just for exception data that can be held outside of the nLog DLL, removing the requirement to reference a custom build of the DLL (which probably breaks NuGet package deployments, etc.).
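For anyone wanting to experiment with that second approach, the sketch below shows roughly what such a standalone renderer could look like. It assumes the NLog 2.0 LayoutRenderer base class and [LayoutRenderer] attribute; the renderer name “exceptiondata”, the namespace and the comments are my own assumptions, not part of nLog:

namespace Logging.Extensions
{
    using System.Text;
    using NLog;
    using NLog.LayoutRenderers;

    // Rough sketch only: a standalone layout renderer that writes out Exception.Data
    // without needing a custom build of the nLog DLL.
    [LayoutRenderer("exceptiondata")]
    public class ExceptionDataLayoutRenderer : LayoutRenderer
    {
        protected override void Append(StringBuilder builder, LogEventInfo logEvent)
        {
            if (logEvent.Exception == null)
            {
                return;
            }

            var separator = string.Empty;
            foreach (var key in logEvent.Exception.Data.Keys)
            {
                builder.Append(separator);
                builder.AppendFormat("{0}: {1}", key, logEvent.Exception.Data[key]);
                separator = " ";
            }
        }
    }
}

Once the containing assembly is registered via the <extensions> element of the nLog configuration, the renderer would then be referenced as ${exceptiondata} rather than through ${exception:format=}.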

Friday, 26 August 2011

SQL Agent Immediately Stops

We just ran into an interesting problem where the SQL Server Agent would start and then immediately stop. No errors were reported in the event log, but running the following via the command line returned "StartServiceCtrlDispatcher failed (error 6)":

"[[your SQL Path]]\Binn\SQLAGENT.EXE" -i [[sql Instance]]

Googling the error in question returned this forum post, which contained the solution. We had reinstalled the service and the account that we were running under did not have the permissions to update / overwrite something (it wasn't the error file in question). Running the agent under a different account solved the issue; it would be good to look into exactly what problem we were encountering, but for now it's enough that it's running.

Tuesday, 23 August 2011

Exceptions inside IComparer.Compare(x, y)

When writing or using an implementation of IComparer.Compare(x, y) you may encounter the following error message:

Unable to sort because the IComparer.Compare() method returns inconsistent results. Either a value does not compare equal to itself, or one value repeatedly compared to another value yields different results

It is highly likely that the code within the "Compare" method is throwing an exception. We encountered this problem when trying to access an array out of bounds in a particular scenario. Updating our tests and our code to return a correct result in this scenario fixed the issue for us.
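To make the failure mode concrete, here is a small hypothetical example (the comparer and its types are made up for illustration, not our actual code) of a Compare implementation that throws for an edge case, followed by a version that returns a consistent result instead:

using System.Collections.Generic;

// Assumes every array has at least one element; an empty array makes Compare throw
// IndexOutOfRangeException mid-sort, which can surface as the misleading
// "inconsistent results" error rather than the real exception.
public class FirstElementComparer : IComparer<int[]>
{
    public int Compare(int[] x, int[] y)
    {
        return x[0].CompareTo(y[0]);
    }
}

// A corrected version returns a consistent, exception-free result for the edge case.
public class SafeFirstElementComparer : IComparer<int[]>
{
    public int Compare(int[] x, int[] y)
    {
        if (x.Length == 0 && y.Length == 0) return 0;
        if (x.Length == 0) return -1;
        if (y.Length == 0) return 1;
        return x[0].CompareTo(y[0]);
    }
}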

Saturday, 2 April 2011

Unit Test Code Coverage

It seems that a common aim when first starting out in unit testing is to obtain 100% code coverage with our unit tests.  This single metric becomes the defining goal and, once obtained, a new piece of functionality is targeted.  After all, if you have 100% code coverage you can’t get better than that, can you?

It’s probably fair to say that it’s taken me several years and a few failed attempts at test driven development (TDD) to finally understand why production code can still fail in code that is “100%” covered by tests!  At its most fundamental level this insight comes from realising that “100% code coverage” is not the aim of well tested code, but a by-product!

Consider a basic object “ExamResult” that is constructed with a single percentage value.  The object has a read only property returning the percentage and a read only bool value indicating a pass/fail status.  The code for this basic object is shown below:

namespace CodeCoverageExample
{
    using System;

    public class ExamResult
    {
        private const decimal PASSMARK = 75;

        public ExamResult(decimal score)
        {
            if (score < 0 || score > 100)
            {
                throw new ArgumentOutOfRangeException("score", score, "'Score' must be between 0 and 100 (inclusive)");
            }

            this.Score = score;
            this.Passed = DetermineIfPassed(score);
        }

        public decimal Score { get; private set; }

        public bool Passed { get; private set; }

        private static bool DetermineIfPassed(decimal score)
        {
            return (score >= PASSMARK);
        }
    }
}

For the code above, the following tests would obtain the magic “100% code coverage” figure:

namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class PoorCodeCoverage
    {
        [TestMethod]
        public void ExamResult_BadArgumentException()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(-1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed()
        {
            Assert.IsTrue(new ExamResult(79).Passed);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed()
        {
            Assert.IsFalse(new ExamResult(0).Passed);
        }
    }
}

Note: The testing examples in these blog posts use both MSTest and NUnit.  By decorating the class with MSTest attributes you get automated test running in TFS continuous integration “out of the box”.  Aliasing “Assert” to the NUnit version gives access to NUnit’s assertions (which I was originally more familiar with and still prefer).

Running any code coverage tool will clearly show all the paths are being tested, but are you really protected against modifications introducing unintended logic changes?  This can be checked by running through a few potential situations: changing the pass mark to 80% would cause the above unit tests to fail, but reducing it to 1% wouldn’t.  If you consider that the main purpose of the unit test is to verify that the exam result is correctly determined (and the potential consequences in the “real world” if it is not), then this sort of check is not fit for purpose.  In this scenario it is critical that edge cases are tested – these are the points at which a result passes from being a failure to a pass, and similarly from being a valid result to an invalid one (you can’t score less than 0% or more than 100%).  Similarly, short cuts should not be taken in asserting the state of the object under test in each individual test – don’t assume that because the “Score” property was correctly set in one test it will be correct in another (and therefore leave it untested).

The following improved unit tests verify the desired behaviour of the object in full and, in the process of this verification, cover 100% of the code.  It is this change in priority that is critical when designing and developing your unit tests.  Only when all the logic paths through your code are tested are your unit tests complete, and at this point you should by default have 100% code coverage.

namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class GoodCodeCoverage
    {
        private const decimal LOWEST_VALID_SCORE = 0;
        private const decimal HIGHEST_VALID_SCORE = 100;
        private const decimal PASSMARK = 75;

        [TestMethod]
        public void ExamResult_BadArgumentException_UpperLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(HIGHEST_VALID_SCORE + 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_BadArgumentException_LowerLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(LOWEST_VALID_SCORE - 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_HigherLimit()
        {
            AssertCall(HIGHEST_VALID_SCORE, true);
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_LowerLimit()
        {
            AssertCall(PASSMARK, true);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_HigherLimit()
        {
            AssertCall(PASSMARK - 1, false);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_LowerLimit()
        {
            AssertCall(LOWEST_VALID_SCORE, false);
        }

        private void AssertCall(decimal score, bool result)
        {
            var examResult = new ExamResult(score);
            Assert.That(examResult.Score, Is.EqualTo(score));
            Assert.That(examResult.Passed, Is.EqualTo(result));
        }
    }
}

Additional Comment: Whilst working through these examples I considered exposing the “pass-mark” constant held in the “ExamResult” object so it could be used within the unit tests.  In certain situations that could be acceptable (or even desirable).  However, unless there is a requirement to do so, it is probably better to keep the two separate, as this forces the unit test to explicitly define the pass / fail point that it is testing.

Tuesday, 22 February 2011

Improving “Boiler Plate” Data-Reader Code – Part 2

In Part 1 of this series we started with a basic Data-Reader / SQL Connection/Command pattern and illustrated how it is possible to abstract the parsing of the Data Reader into a standalone object that can be fully unit tested in isolation of the calling code.   In Part two of the series we will highlight a very simple optimisation that can be made to the “DataReader” convertor and the required update to the tests to capture/verify the changes.  In this revision the original “CustomerDRConvertor” has been updated to include extremely basic caching, which for the duration of the object’s existence should ensure that only the first call needs to reference the “GetOrdinal(…)” method to find the element index of each desired column.  Subsequent calls can then use this “cached” index to reference the column by position rather than name.

namespace DataAccess.Example
{
    using System.Data;
    using System.Data.BoilerPlate;

    public class CustomerDRConvertorPart2 : IConvertDataReader<Customer>
    {
        private int idIndex = -1;
        private int firstNameIndex = -1;
        private int surnameIndex = -1;

        public Customer Parse(IDataReader dataReader)
        {
            if (idIndex == -1)
            {
                idIndex = dataReader.GetOrdinal("Id");
                firstNameIndex = dataReader.GetOrdinal("FirstName");
                surnameIndex = dataReader.GetOrdinal("Surname");
            }

            return new Customer
            {
                Id = dataReader.GetGuid(idIndex),
                FirstName = dataReader.GetString(firstNameIndex),
                Surname = dataReader.GetString(surnameIndex)
            };
        }
    }
}

In traditional ASP applications (back in the day) the above caching pattern used to result in reasonable performance gains.   I’ve not looked into the benefits within a modern day .NET application, and in some instances this could be classed as premature optimisation.  But for the purpose of this example it provides a perfect illustration of how abstracting the data reader parsing from the connection/command code can provide many benefits.  Updated objects can be developed and tested in complete isolation of the existing code and then plugged into the code base with only minimal changes.

This updated code can be verified using the unit test below.  In the test the “Parse(…)” method is called once and the mocked objects are verified to confirm that they were called correctly.  The “Parse(…)” method is then called again and the mocked objects verified to make sure that the second call only resulted in an additional call to the GetGuid(…) and GetString(…) methods.  Due to the very basic caching that was implemented there is no need for the second call to make any GetOrdinal(…) references, which the verification of the mocked objects can confirm.  The tests verify the expected behaviour, not the inner workings of any particular implementation of a DataReader object.

namespace DataAccess.Example.Tests
{
    using System;
    using System.Data;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Moq;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class CustomerDRConvertorPart2Tests
    {
        [TestMethod]
        public void CustomerDRConvertor_GoodCall()
        {
            var dataReader = new Mock<IDataReader>();

            dataReader.Setup(dr => dr.GetOrdinal("Id")).Returns(1);
            dataReader.Setup(dr => dr.GetOrdinal("FirstName")).Returns(2);
            dataReader.Setup(dr => dr.GetOrdinal("Surname")).Returns(3);

            var id = Guid.NewGuid();
            const string firstName = "John";
            const string surname = "Doe";

            dataReader.Setup(dr => dr.GetGuid(1)).Returns(id);
            dataReader.Setup(dr => dr.GetString(2)).Returns(firstName);
            dataReader.Setup(dr => dr.GetString(3)).Returns(surname);

            var convertor = new CustomerDRConvertorPart2();

            var customer = convertor.Parse(dataReader.Object);

            Assert.That(customer.Id, Is.EqualTo(id));
            Assert.That(customer.FirstName, Is.EqualTo(firstName));
            Assert.That(customer.Surname, Is.EqualTo(surname));

            dataReader.Verify(dr => dr.GetOrdinal(It.IsAny<string>()), Times.Exactly(3));
            dataReader.Verify(dr => dr.GetOrdinal("Id"), Times.Once());
            dataReader.Verify(dr => dr.GetOrdinal("FirstName"), Times.Once());
            dataReader.Verify(dr => dr.GetOrdinal("Surname"), Times.Once());

            dataReader.Verify(dr => dr.GetGuid(It.IsAny<int>()), Times.Once());
            dataReader.Verify(dr => dr.GetGuid(1), Times.Once());

            dataReader.Verify(dr => dr.GetString(It.IsAny<int>()), Times.Exactly(2));
            dataReader.Verify(dr => dr.GetString(2), Times.Once());
            dataReader.Verify(dr => dr.GetString(3), Times.Once());

            convertor.Parse(dataReader.Object);

            dataReader.Verify(dr => dr.GetOrdinal(It.IsAny<string>()), Times.Exactly(3));
            dataReader.Verify(dr => dr.GetOrdinal("Id"), Times.Once());
            dataReader.Verify(dr => dr.GetOrdinal("FirstName"), Times.Once());
            dataReader.Verify(dr => dr.GetOrdinal("Surname"), Times.Once());

            dataReader.Verify(dr => dr.GetGuid(It.IsAny<int>()), Times.Exactly(2));
            dataReader.Verify(dr => dr.GetGuid(1), Times.Exactly(2));

            dataReader.Verify(dr => dr.GetString(It.IsAny<int>()), Times.Exactly(4));
            dataReader.Verify(dr => dr.GetString(2), Times.Exactly(2));
            dataReader.Verify(dr => dr.GetString(3), Times.Exactly(2));
        }
    }
}

In part three of this series I will cover how the above code can be moved into an abstract base class for data access that all inheriting classes can utilise through interfaces and generics.
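Purely as a sketch of that direction (the class name, method name and use of SqlConnection below are my assumptions for illustration, not the actual part three code), such a base class might look something like this:

namespace DataAccess.Example
{
    using System.Collections.Generic;
    using System.Data;
    using System.Data.BoilerPlate;
    using System.Data.SqlClient;

    // Hypothetical sketch of a reusable data access base class: inheriting classes
    // supply the SQL and an IConvertDataReader<T> and get the connection/command
    // boiler-plate for free.
    public abstract class DataAccessBase
    {
        private readonly string connectionString;

        protected DataAccessBase(string connectionString)
        {
            this.connectionString = connectionString;
        }

        protected IEnumerable<T> ExecuteReader<T>(string commandText, IConvertDataReader<T> convertor)
        {
            using (var connection = new SqlConnection(this.connectionString))
            using (var command = connection.CreateCommand())
            {
                command.CommandText = commandText;
                connection.Open();

                using (var dataReader = command.ExecuteReader())
                {
                    while (dataReader.Read())
                    {
                        yield return convertor.Parse(dataReader);
                    }
                }
            }
        }
    }
}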

Part 3 builds on the code developed in parts 1 & 2, turning it into a usable solution.

Monday, 21 February 2011

Improving “Boiler Plate” Data-Reader Code – Part 1

Recently we’ve been looking at improving our unit test code coverage and reducing the amount of duplicated code around our bespoke data access layer.  Where possible we have moved over to NHibernate, but certain parts of the data access must still be written using the standard ADO.NET connection/command pattern.  Typically hidden right in the middle of this bespoke code is a while loop that is pivoting a data reader into a POCO, which is impossible to repeatably unit test in a stable environment unless you set up a dedicated data repository for testing or try to wrap up / mock the connection / command objects.   Neither of these options is really desirable, as we aren’t really interested in testing / mocking the .NET provider data access objects.

To get around this issue we looked into how we could generate some boiler-plate code that we could roll out across our code base.  This code base will be introduced step by step during this series with the first step covering the abstraction of the data reader processing into a standalone object that can be tested in isolation of the data access code.   The example code that we are looking to migrate is shown below, a typical unrestricted “Get()” call (in this example we don’t have many customers!).

public IEnumerable<Customer> Get()
{
    using (var connection = new SqlConnection(this.config.ConnectionString))
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "SELECT id, Firstname, Surname FROM Customer";
            connection.Open();

            using (var dataReader = command.ExecuteReader())
            {
                while (dataReader.Read())
                {
                    yield return new Customer
                    {
                        Id = dataReader.GetGuid(dataReader.GetOrdinal("Id")),
                        FirstName = dataReader.GetString(dataReader.GetOrdinal("FirstName")),
                        Surname = dataReader.GetString(dataReader.GetOrdinal("Surname"))
                    };
                }
            }
        }
    }
}

Hidden below (and behind) a concrete implementation of a SqlConnection and a SqlCommand is the code that we are interested in right now:

yield return new Customer
{
    Id = dataReader.GetGuid(dataReader.GetOrdinal("Id")),
    FirstName = dataReader.GetString(dataReader.GetOrdinal("FirstName")),
    Surname = dataReader.GetString(dataReader.GetOrdinal("Surname"))
};

As previously stated, unless we set up a test repository or find a way to wrap / mock these concrete instances, we are unable to test the creation of the customer POCO from the data reader. To rectify this we start with an interface defining how we would like the calling code to convert the passed-in data reader:

namespace System.Data.BoilerPlate
{
    public interface IConvertDataReader<out T>
    {
        T Parse(IDataReader dataReader);
    }
}

Taking a very basic customer POCO object:

namespace DataAccess.Example
{
    using System;

    public class Customer
    {
        public Guid Id { get; set; }
        public string FirstName { get; set; }
        public string Surname { get; set; }
    }
}

An implementation of the IConvertDataReader interface can be created quite simply using the code below.  The validation / error checking around each data reader value conversion could be as simple or as complex as you need. Part two of this series will cover a basic optimisation that can be made to this code to potentially speed up access in repeated calls such as a “while(dataReader.Read())” loop.

namespace DataAccess.Example
{
    using System.Data;
    using System.Data.BoilerPlate;

    public class CustomerDRConvertor : IConvertDataReader<Customer>
    {
        public Customer Parse(IDataReader dataReader)
        {
            return new Customer
            {
                Id = dataReader.GetGuid(dataReader.GetOrdinal("Id")),
                FirstName = dataReader.GetString(dataReader.GetOrdinal("FirstName")),
                Surname = dataReader.GetString(dataReader.GetOrdinal("Surname"))
            };
        }
    }
}

This data reader convertor can now be unit tested in isolation of the code that will be using it, with a combination of NUnit and Moq to both assert actual returned results and verify expected behaviour. It’s probably worth highlighting at this stage that we are using a combination of MSTest and NUnit; doing things this way brings the best of both worlds – in TFS you get automated unit testing as part of the CI build because you are referencing MSTest, but by adding an alias to NUnit’s Assert you get access to NUnit’s fluent API (which we prefer).

namespace DataAccess.Example.Tests
{
    using System;
    using System.Data;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Moq;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class CustomerDRConvertorTests
    {
        [TestMethod]
        public void CustomerDRConvertor_GoodCall()
        {
            var dataReader = new Mock<IDataReader>();
            dataReader.Setup(dr => dr.GetOrdinal("Id")).Returns(1);
            dataReader.Setup(dr => dr.GetOrdinal("FirstName")).Returns(2);
            dataReader.Setup(dr => dr.GetOrdinal("Surname")).Returns(3);

            var id = Guid.NewGuid();
            const string firstName = "John";
            const string surname = "Doe";

            dataReader.Setup(dr => dr.GetGuid(1)).Returns(id);
            dataReader.Setup(dr => dr.GetString(2)).Returns(firstName);
            dataReader.Setup(dr => dr.GetString(3)).Returns(surname);

            var convertor = new CustomerDRConvertor();
            var customer = convertor.Parse(dataReader.Object);

            Assert.That(customer.Id, Is.EqualTo(id));
            Assert.That(customer.FirstName, Is.EqualTo(firstName));
            Assert.That(customer.Surname, Is.EqualTo(surname));

            dataReader.Verify(dr => dr.GetOrdinal(It.IsAny<string>()), Times.Exactly(3));
            dataReader.Verify(dr => dr.GetOrdinal("Id"), Times.Once());
            dataReader.Verify(dr => dr.GetOrdinal("FirstName"), Times.Once());
            dataReader.Verify(dr => dr.GetOrdinal("Surname"), Times.Once());
            dataReader.Verify(dr => dr.GetGuid(It.IsAny<int>()), Times.Once());
            dataReader.Verify(dr => dr.GetGuid(1), Times.Once());

            dataReader.Verify(dr => dr.GetString(It.IsAny<int>()), Times.Exactly(2));
            dataReader.Verify(dr => dr.GetString(2), Times.Once());
            dataReader.Verify(dr => dr.GetString(3), Times.Once());
        }
    }
}

Obviously the above is a very basic test, but the approach can be scaled to reflect any additional or complex processing that may take place in the implementation of the Parse(IDataReader dataReader) method. In fact, in part two of this series we will change the implementation and verify the expected behaviour whilst confirming that the returned results have not changed.
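As one hypothetical illustration of that sort of additional processing (the class name and the nullable column are assumptions made purely for this example, not part of the series), a variant of the convertor that tolerates a NULL Surname column might look like this:

namespace DataAccess.Example
{
    using System.Data;
    using System.Data.BoilerPlate;

    // Illustrative only: checks IsDBNull before reading the Surname column, showing
    // how extra validation / error checking slots into the Parse method.
    public class NullTolerantCustomerDRConvertor : IConvertDataReader<Customer>
    {
        public Customer Parse(IDataReader dataReader)
        {
            var surnameIndex = dataReader.GetOrdinal("Surname");

            return new Customer
            {
                Id = dataReader.GetGuid(dataReader.GetOrdinal("Id")),
                FirstName = dataReader.GetString(dataReader.GetOrdinal("FirstName")),
                Surname = dataReader.IsDBNull(surnameIndex) ? null : dataReader.GetString(surnameIndex)
            };
        }
    }
}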

Finally, to wrap up part one of this series, here is the original code updated to reflect the changes discussed here. We’ve not really changed a lot, but we are now able to unit test code that was previously difficult to reach.


public IEnumerable<Customer> Get()
{
    using (var connection = new SqlConnection(this.config.ConnectionString))
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "SELECT id, Firstname, Surname FROM Customer";
            connection.Open();

            using (var dataReader = command.ExecuteReader())
            {
                var convertor = new CustomerDRConvertor();
                while (dataReader.Read())
                {
                    yield return convertor.Parse(dataReader);
                }
            }
        }
    }
}

Part 2 of this series shows how the code covered in this post can easily be optimised to increase performance, with tests updated to reflect the change.

Saturday, 5 February 2011

MVC3.0 Installation Hangs

I've just installed MVC3.0 on a fresh PC using the new web installer application and was surprised at how long it seemed to be taking.  Digging around a bit deeper, I remembered that in the options I'd selected to use IIS rather than IIS Express, and a quick look at the Services panel highlighted that IIS was currently stopped.  I started IIS and the MVC3.0 installation finished in seconds!

So if you're having problems installing MVC3.0 then just take a moment to check that IIS is started.

Friday, 4 February 2011

VS2010: How to change a "Class Library" project into a "Test Project"

Whilst working with VS2010 projects it can be really frustrating if you accidentally create your unit testing projects as class libraries (or migrate an existing class library into a unit testing library), mainly because the context sensitive "Add New" menu no longer contains the "New Test" option.  This can be easily fixed by directly amending the project file, adding the following element to the first <PropertyGroup>:

<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>

Reload the project in VS2010 and your class library has now become a testing library, and you have the "Add New Test" option(s) back again.

Thursday, 27 January 2011

Accessing GAC DLLs in Visual Studio 2010 "Add Reference"

Or, better titled: I've just registered my DLL in the GAC, so why does it not appear in Visual Studio's "Add Reference" dialog?

It's probably the first thing people notice after they've started using the GAC for the first time. You've managed to register your DLL and you've confirmed it is in the GAC, but it just won't appear in Visual Studio's "Add Reference" dialog box.   As it turns out, for what were probably good reasons, the Visual Studio team decided to use the registry to hold the list of DLLs that appear in the "Add Reference" dialog box.

You have two options. The first is to manually edit the registry, adding your DLL to the following registry keys (a sketch of doing this from code follows the list):

  • [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\AssemblyFolders]
  • [HKEY_CURRENT_USER\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\AssemblyFolders]
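As a rough sketch of that first option, the registration can also be scripted with the Microsoft.Win32.Registry API; the sub-key name and folder path below are hypothetical, and the key path is simply the current-user key listed above:

using Microsoft.Win32;

public static class AssemblyFolderRegistration
{
    public static void Register()
    {
        // Creates a sub-key under AssemblyFolders whose (Default) value is the folder
        // Visual Studio should scan for assemblies to list in "Add Reference".
        // HKEY_CURRENT_USER avoids the need for admin rights; use Registry.LocalMachine
        // to register the folder for all users.
        using (var key = Registry.CurrentUser.CreateSubKey(
            @"SOFTWARE\Wow6432Node\Microsoft\.NETFramework\AssemblyFolders\MyCompanyLibraries"))
        {
            key.SetValue(string.Empty, @"C:\MyCompany\Libraries");
        }
    }
}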

Another option is to install the Visual Studio extension MUSE.VSExtensions, which adds an extra option to the context sensitive menu when you right-click on "References".  The extra option is "Add GAC Reference", which pops up a new dialog box listing all the items currently held in the GAC – no messing around in the registry each time you add something new to the GAC.  It is worth noting that the search is case sensitive.

See Adding a DLL to the GAC in .NET 4.0 for more information on adding DLLs to the GAC in .NET 4.0 and the differences between this and earlier versions of the framework.

Adding a DLL to the GAC in .NET 4.0

GACUTIL.EXE is no longer supplied / installed along with Visual Studio; instead it has been moved into the Windows SDK v7.1.   So to install a DLL into the GAC for .NET 4.0 (or anything after v1.1, I believe) you must first download and install the SDK.  Once you have done this you must locate the relevant version of GACUTIL from the four possible locations:

  • Program Files\Microsoft SDKs\Windows\v7.1\Bin\
  • Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64
  • Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools
  • Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools\x64

The first two are for registering DLLs to be used in .NET 3.5 or earlier, with the first being for x86 and the second for x64 versions.

The second two are for registering DLLs to be used in .NET 4.0, again with the first being for x86 and the second for x64 versions.

With the release of .NET 4.0 the framework now has two GACs, this stackoverflow post explains why.

Tuesday, 25 January 2011

Javascript: Type and Value Comparison

A common problem when doing value comparison in Javascript is the automatic type conversion that happens for you; this means all the following statements resolve to "true".

  • 1 == 1;
  • 1 == '1';
  • 1 == "1";

As a little bonus snippet in a post by Steve Wellens, he provides an answer to the problem: the triple equals ("===") and its corresponding not equal ("!==").  These comparison operators perform a type check as well as a value check.  This means that a string value will no longer equal its corresponding numeric value, as the type check will return false.  A useful bit of functionality to have.

Sunday, 23 January 2011

SQL Server Licensing

As part of my current project I’ve spent some time over the past couple of months trying to determine the best (cheapest) SQL Server configuration to support web servers running in a virtualised environment. As a quick disclaimer, the following are my thoughts on the subject and should be used as guidance for further research only!

Firstly you need to figure out whether you are going to license using the “per user” or “per processor” model. For most typical configurations a “break even” point can be determined when it becomes cheaper to switch to the per processor licensing model instead of the per user model. It is important however to plan for future growth as it can be very expensive to try and switch from one model to another once a system has been deployed. It is not possible to convert SQL “user” CALs into a processor license.

If you are developing a web application that will be exposed via the internet to external customers then it would probably make sense to use the Web edition, which only comes with “per processor” licensing. The Microsoft definition of what constitutes a user is critical when selecting the Web edition, as it cannot be used for intranet based applications used by company employees. See the licensing section for more information.

To aid in the selection of the correct edition Microsoft have put together the following SQL Server 2008 Comparison Table

It is also worth considering the environment that is going to host the SQL Server instance. If you intend to host SQL Server in a virtualised environment things can quickly become confusing and potentially (yet again) expensive.  Even the Microsoft FAQ on SQL licensing appears to contradict itself – the answer to the question “What exactly is a processor license and how does it work?” appears to state that you only need to buy a license per physical processor, even for virtualised environments.  However, the answer to a later question, “How do I license SQL Server 2008 for my virtual environments?”, then contradicts this by giving a more detailed answer highlighting that in a virtualised environment the definition of the “per processor” model changes depending upon which edition of SQL Server you have purchased.

You then need to dig around the Microsoft site to investigate the licensing model a little more – the Licensing Quick Reference PDF provides a lot more information from page 3 onwards.  I won’t duplicate the information held in that document, because if nothing else it may be updated over time, but it is worth noting the differences between the Data Centre / Enterprise editions and the Standard edition. For the Standard edition the document describes the formula that should be used to determine the number of per processor licenses that should be purchased for each virtualised instance of SQL Server. At the time of writing you divide the number of virtual processors by the number of cores on the physical processor (rounding up) to get the number of SQL licenses needed. This does mean that if you have a 4 core physical CPU and you expose each core as a virtual CPU to the instance of SQL Server, you still only need one “per proc” license. A common misconception is that you must have a “per proc” license for each virtualised CPU exposed to SQL Server; hopefully this clears up that potentially costly misunderstanding.
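Expressed as code, the Standard edition calculation described above boils down to a one-line formula; the helper below is just a worked illustration of that formula (the names are mine), not anything from the licensing documents:

using System;

public static class SqlLicensing
{
    // Standard edition, per-processor licensing in a virtualised environment:
    // licenses required = ceiling(virtual processors / cores per physical processor).
    public static int LicensesRequired(int virtualProcessors, int coresPerPhysicalProcessor)
    {
        return (int)Math.Ceiling((double)virtualProcessors / coresPerPhysicalProcessor);
    }
}

// LicensesRequired(4, 4) == 1   (the 4 core example above)
// LicensesRequired(6, 4) == 2   (six virtual CPUs on a 4 core physical CPU)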

Resolving the HTTP/HTTPS “Document contains both secure/unsecure content” message

One of the most commonly encountered issues when developing websites that must support both HTTP and HTTPS pages is the warning that a secure page “contains both secure and unsecure content”. In a nutshell this is when a page that is being displayed via the HTTPS protocol contains one or more references to additional resources (JS/CSS/Images) using just HTTP.

The solution is easy and well documented for locally referenced resources: when making the reference to the required file you exclude the protocol and domain name, ending up with something like “/styles/main.css”, which will quite happily call “main.css” from a “styles” folder in the root of the web application regardless of whether the containing page is being called by HTTP or HTTPS.

Note: .NET provides functionality such as ResolveUrl(“~/styles/main.css”), which should always be used in preference to hard coded paths such as “/styles/main.css”. Using ResolveUrl(…) will work regardless of whether the web application has been set up in IIS as a website or as a child application of a parent website. However, the hardcoded path will always resolve against the root of the website, so if your web application has been deployed within IIS as a child application of a website you will typically see lots of 404s as the browser is unable to locate the resources using the URL provided.
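As a quick illustration of that note (the page name is hypothetical, and the resolved path assumes the application is deployed as a child application called “ChildApp”):

using System;
using System.Web.UI;

public partial class SecurePage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Resolves against the application root, e.g. "/ChildApp/styles/main.css"
        // when the site is deployed as a child application called "ChildApp".
        string applicationRelative = this.ResolveUrl("~/styles/main.css");

        // Always resolves against the website root, e.g. "/styles/main.css",
        // which 404s when the application is actually deployed under "/ChildApp".
        string hardCoded = "/styles/main.css";

        Trace.Write(applicationRelative);
        Trace.Write(hardCoded);
    }
}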

Typically, externally referenced resources have proved more troublesome. I’ve seen quick and dirty solutions that always call external references via HTTPS. This gets around the warning when viewing the calling page over HTTPS, but introduces additional processing/overhead when viewing the referencing page over HTTP. More typically, the URL for the required external resource is passed to a helper function which determines which protocol to add depending upon whether the containing page is HTTP or HTTPS.

But it would appear that all the solutions for resolving the protocol for externally referenced resources were over-engineered! Under the section A Better Solution, Dave Ward provides a solution that is pretty much identical to the tried and tested internally referenced resource solution. Basically the RFC 3986 spec allows resources to be referenced using a protocol-less format. So instead of worrying whether you should be calling “http://cdn.domain/common-resouce.js” or “https://cdn.domain/common-resouce.js”, you can just call “//cdn.domain/common-resouce.js” and the protocol to be used for the external reference is determined by the protocol context of the containing page! Thanks for highlighting this Dave, much easier!