Wednesday, 15 December 2010

ASP.NET (IIS) broken after installing updates

It looks like the recent Microsoft updates broke ASP.NET / IIS on my development machine.  Trying to launch any website hosted on my local IIS server returned a 500 error and the following error message: 

Calling LoadLibraryEx on ISAPI filter "C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" failed

And checking the event log just returned a similar error message: 

ISAPI Filter 'c:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349. 

After a bit of digging around on the web, I found that the solution was simply to re-register ASP.NET with IIS using the following command:

c:\Windows\Microsoft.NET\Framework64\v4.0.30319>aspnet_regiis -r 

Everything then thankfully burst back into life!

Tuesday, 14 December 2010

Beware of Automapper modifying protected fields

I've just spent a good couple of hours trying to debug what appeared to be a corrupt session in NHibernate.  The unit tests passed, and the NHibernate code worked when isolated, but within the application any call to Session.Save() complained about a corrupt session.

After much investigation and head scratching, the problem was traced to an earlier call to AutoMapper.  This code mapped an ID in the source to a UserID in the target, which had been correctly defined in the mapping file and worked without any problems.  What I had missed, however, was that AutoMapper was also mapping the source ID onto the protected ID of the target, which NHibernate was using as its entity key.  When using the entity directly you can't change the ID because it is protected (so it can safely stay under NHibernate's control).  AutoMapper, however, uses reflection and so bypasses the protected status of the property, and in doing so was breaking referential integrity.  In the end all that was needed was to update the mapping to ignore (or use the destination value for) the ID on the destination.  Problem solved.
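As a sketch of the fix (type and property names here are illustrative, not from the original code), using AutoMapper's classic static API:

```csharp
// Hypothetical mapping: copy the source ID onto UserId as intended,
// but explicitly leave the NHibernate entity key (Id) untouched so
// the ORM retains control of it.
AutoMapper.Mapper.CreateMap<UserMessage, UserEntity>()
    .ForMember(dest => dest.UserId, opt => opt.MapFrom(src => src.ID))
    .ForMember(dest => dest.Id, opt => opt.Ignore());   // or opt.UseDestinationValue()
```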

Thursday, 18 November 2010

SQL Server: Checking @@ROWCOUNT and @@ERROR

The following SQL contains a subtle bug that will always result in the text “No Rows Affected” being output.

SELECT 'Hard Coded Row'
IF (@@ERROR <> 0)
    PRINT 'Error Occurred'
ELSE IF (@@ROWCOUNT = 0)
    PRINT 'No Rows Affected'

The bug occurs because the reference to @@ERROR in the first “IF” statement counts as a SQL statement in its own right, resetting the value held in @@ROWCOUNT.  As the second “IF” statement (checking @@ROWCOUNT) is only evaluated when the first “IF” statement (@@ERROR) is false, by that point @@ROWCOUNT has already been reset to zero, so the check will always be true!   Note: Reversing the order of the two IF statements would instead hide any potential errors, as @@ERROR would be reset by the act of checking @@ROWCOUNT.

The safest way to evaluate this statement is to SELECT the contents of @@ERROR and @@ROWCOUNT into local variables within a single statement and then check the values of the local variables, in other words:

DECLARE @ErrorCode INT
DECLARE @RowsAffected INT

SELECT 'Hard Coded Row'
SELECT @ErrorCode = @@ERROR, @RowsAffected = @@ROWCOUNT

IF (@ErrorCode <> 0)
    PRINT 'Error Occurred'
ELSE IF (@RowsAffected = 0)
    PRINT 'No Rows Affected'

Problem solved!

Monday, 25 October 2010

The Continual SQL battle of Reindex and Shrink DB.

Over the weekend I went along to the fantastic DeveloperDeveloperDeveloper day held at Microsoft up in Reading.  The quality of the presentations was fantastic and even for frameworks/subjects I was familiar with I learnt loads.   For me, the most useful presentation came at the end of the day (just as the brain was beginning to shut down due to information overload).  Simon Sabin (from SQL Know How) presented "Things you should know about SQL as a developer".  
A snapshot of his presentation can be found on his blog, but for me the most enlightening point was the battle between reindexing a table and shrinking a database.   Typically the two feel like they should go hand in hand, but they will in fact fight each other.   Re-indexing a table creates a new copy of the table in question, with the data correctly ordered to reduce fragmentation.  As the original data is just marked as available, the process of re-indexing a table can result in the database growing by the size of the table being re-indexed.  Once the re-index process has completed, the database will report all the space taken by the old copy of the table as free space.  
However, if at this point you shrink the database you will refragment your nicely re-indexed table, as the shrinking process reads/moves the last page of the table first.  This means your table content will end up reversed (and totally fragmented) after a shrink command.
There are many reasons why you shouldn't shrink your database, this being just one of them!
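A minimal T-SQL sketch of the pattern (the table and database names are illustrative):

```sql
-- Rebuild all indexes on a (hypothetical) table: this creates ordered
-- copies of the index data, temporarily growing the data file.
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Tempting at this point, but don't: shrinking moves pages from the
-- end of the file first, re-fragmenting the indexes just rebuilt.
-- DBCC SHRINKDATABASE (MyDatabase);
```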

Thursday, 14 October 2010

Changing the behaviour of a mocked object

By default, objects mocked using MOQ will handle calls to methods/properties that have not been set up, which 99% of the time is the desired behaviour.  Occasionally, however, a situation requires that MOQ throws an exception when an undefined method or property is accessed.  The MOQ framework provides this functionality via the MockBehavior enum, but it can only be set during construction of the mocked object.  Typically this won’t cause any problems as mocked objects are created inline, but we’ve just encountered a scenario that needed a little lateral thinking: for reusability we have objects that inherit from Mock<>.   These mocked objects already had constructor parameters which (in the short term) we didn’t wish to change, so to get around this we created a static property on the mocked object that is then injected into the base MOQ constructor.

public class MockHttpContext : Mock<HttpContextBase>
{
    static MockHttpContext()
    {
        MockBehavior = MockBehavior.Default;
    }

    public static MockBehavior MockBehavior { get; set; }

    public MockHttpContext(bool shareCommonHttpCookieCollection = true)
        : base(MockBehavior)
    {
        ...
    }

    ...
}

Note: Being a static property, it does mean that we have to remember to reset the value after constructing the instance in question, but in the short term it did resolve the issue of how to change the MockBehavior without modifying the mocked object's constructor.
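In use it looks something like the following sketch; the reset step at the end is the part that is easy to forget:

```csharp
// Ask for a strict mock just for this one instance: any call that has
// not been explicitly set up will now throw.
MockHttpContext.MockBehavior = MockBehavior.Strict;
var httpContext = new MockHttpContext();

// Reset the static property so later tests get the default behaviour.
MockHttpContext.MockBehavior = MockBehavior.Default;
```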

Tuesday, 28 September 2010

Why you can’t assert a return type of an MVC Controller Action using reflection!

I recently asked this question on Stack Overflow.  I was using reflection and MethodInfo to confirm that a controller contained an action with the required method parameters.  If one was found, this would then allow my code to unit test that the correct Action Filter Attributes were being assigned to the controller action (so a particular action could only respond to an HTTP GET or POST as an example).  

This was all working fantastically well but I made the mistake of trying a unit test too far and was surprised by the outcome.  I was attempting to assert/validate the return type was of the desired type (e.g. ActionResult or JSONResult) but my assertion was failing.  In the debug watch window I could see the return parameter was of the desired type but it was wrapped up inside a RuntimeType.  No amount of fiddling / casting would get the test to pass.  Another SO user helpfully pointed out that an MVC action might actually return RedirectToRouteResult, etc. so the wrapping was a requirement of the call – it was working as intended.

Thinking this through to its conclusion, my original test was flawed: even if it were possible, there is no need to test the return type of the reflected method info.   Only when I am testing actual calls to the controller am I interested in what the returned type is and what it contains!
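In other words, assert the return type on a real invocation rather than via reflection. A sketch (the controller and action names are hypothetical):

```csharp
[Test]
public void Index_Returns_A_ViewResult()
{
    var controller = new HomeController();

    // Invoke the action directly and assert on the actual result,
    // rather than inspecting MethodInfo.ReturnType via reflection.
    ActionResult result = controller.Index();

    Assert.That(result, Is.InstanceOf<ViewResult>());
}
```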


Team Foundation Server 2010 Licensing

UPDATE: Since putting this article together, Microsoft have released their VS2010 and MSDN Licensing White Paper. Whilst it confirms all of the following, it should be used in preference to this information.

Original Article:

Firstly, a bit of a disclaimer - the following are just my observations / opinions only and should not be used to plan any purchase or decision.  I also have no association with Microsoft or any reseller.

As part of a new role I've recently been looking into the costs associated with rolling out TFS 2010 and whilst Microsoft seem to be making their product structure / licensing easier if the confusion surrounding TFS is anything to go by then they've still got a little more to do.

There are two main ways to obtain TFS:

  1. As part of VS2010 + MSDN, or
  2. As a stand alone product.

As far as I can ascertain, both include exactly the same version of TFS. Along with the VS2010 MSDN subscription, a developer receives one Client Access License (CAL) for use with TFS, which allows that developer to connect to any instance of TFS. The stand-alone TFS product does not come with any CALs; these must be purchased separately. However, it does come with 5 EULA exception licenses (see following blog article), which allow up to 5 users to connect to that instance of TFS without needing a CAL.

Note: you cannot coalesce/merge these EULA exceptions. If you purchased two instances of TFS you would have 2x5 EULA exception licenses; if you had 4 users on one instance and wanted to add a 6th user to the other instance, you would need to purchase a full TFS CAL for that 6th user. You could not transfer the unused license from the first instance to the other server. It is also worth noting that in this scenario the 6th user, who has a CAL, would be able to use the first instance without using up its 5th EULA exception license.

Therefore if only developers are going to be accessing / using TFS and they all have MSDN subscriptions you do not need to purchase the standalone product.  However, if you wish for product managers and testers to access / use TFS, then you should probably purchase the stand alone product too, then up to another 5 people can connect to that instance of TFS along with the developers.

Injecting HttpContextBase into an MVC Controller

It is a shame that when the ASP.NET MVC framework was released they did not think to build IoC support into the infrastructure. All the major components of the MVC engine appear to magically inherit instances of HttpContext and its related objects, which can cause no end of problems if you are trying to utilise unit testing and IoC. Reading around various articles on the subject, getting around this one problem requires the implementation of several different concepts, and you are still left with a workaround. The code below, along with the other links referenced in this article, is my stab at resolving the issue. There’s probably nothing new here, but it does attempt to relate all the information needed to do this for Castle Windsor. The overview is that all controllers inherit from a base controller, which takes an instance of HttpContextBase into its constructor. It then hides the HttpContext property of the main Controller class, supplying its own version instead.

Note: I’m still not sure if providing a default constructor that takes the base instance of HttpContext is a good idea as I’ve not had chance to fully test that functionality – it’s there as a concept that it could be done but from what I’ve learnt recently in injecting HttpContext into Action Filter Attributes I think it will cause problems. Use that constructor with caution!

public abstract class BaseController : Controller
{
    public new HttpContextBase HttpContext { get; private set; }

    protected BaseController()
    {
        this.HttpContext = base.HttpContext;
    }

    protected BaseController(HttpContextBase httpContext)
    {
        if (httpContext == null)
        {
            throw new ArgumentNullException("httpContext");
        }
        HttpContext = httpContext;
    }
}

This base class can then be inherited and used as shown below. Using this ControllerFactory, the injection of the HttpContextBase instance will be handled by the IoC container.

public partial class RealWorldController : BaseController
{
    public RealWorldController(HttpContextBase httpContext)
        : base(httpContext)
    {
        ...
    }
    ...
}

Now along with Mocking HttpContextBase your controllers should be easy to inject and unit test!
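As a sketch of what that unlocks in a unit test (using MOQ; the controller name is taken from the example above):

```csharp
[Test]
public void Controller_Uses_The_Injected_Context()
{
    var httpContext = new Mock<HttpContextBase>();

    // The controller receives the mocked context through its constructor,
    // so no live HttpContext.Current is needed in the test.
    var controller = new RealWorldController(httpContext.Object);

    Assert.That(controller.HttpContext, Is.SameAs(httpContext.Object));
}
```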

Controller Factory for Castle Windsor v2.5.0, with HttpContext Resolution/Injection

The code below defines an object that holds an instance of the Castle Windsor container and sets it up to handle all requests to resolve ASP.NET MVC controllers.  It also includes code to inject instances of HttpRequestBase and HttpContextBase.  This means that you can define injected objects that take references to HttpContextBase in their constructors (which can be your MVC controllers) and they will receive populated instances of these objects.  It is worth noting that if you reference an instance of HttpContext that has not been resolved/injected by your IoC container then there is a high likelihood that you will end up with two or more separate instances, which will cause problems (see this article on Injecting into Action Filter Attributes, which can suffer this issue).

Please feel free to use this code and let me know if you run into any issues or have recommendations on how it could be improved.

public class WindsorControllerFactory : DefaultControllerFactory
{
    IWindsorContainer Container { get; set; }

    public WindsorControllerFactory(IWindsorContainer container)
    {
        Container = container;
        Container.Register(AllTypes.FromThisAssembly()
            .BasedOn(typeof(IController))
            .Configure(c => c.LifeStyle.Is(LifestyleType.Transient)));

        Container.Register(Component.For<HttpRequestBase>()
            .LifeStyle.PerWebRequest
            .UsingFactoryMethod(() => new HttpRequestWrapper(HttpContext.Current.Request)));

        Container.Register(Component.For<HttpContextBase>()
            .LifeStyle.PerWebRequest
            .UsingFactoryMethod(() => new HttpContextWrapper(HttpContext.Current)));

        Container.Register(Component.For<HttpContext>()
            .LifeStyle.PerWebRequest
            .UsingFactoryMethod(() => HttpContext.Current));
    }

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        return (IController)Container.Resolve(controllerType);
    }

    public override void ReleaseController(IController controller)
    {
        Container.Release(controller);
    }
}

This can then be wired up simply by adding the following to your global.asax code.  Note: this expects a “castle” section to be defined in your web.config to resolve non-controller references handled within the WindsorControllerFactory.

public static IWindsorContainer WindsorContainer { get; private set; }

protected void Application_Start()
{
    WindsorContainer = new WindsorContainer(new XmlInterpreter(new ConfigResource("castle")));
    ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(WindsorContainer));
    ...
}

IoC and MVC Action Filter Attributes

As I’ve previously mentioned, anyone starting out with IoC and ASP.NET MVC quickly encounters problems injecting HttpContext and related classes into controllers, etc.  A similar issue surrounds action filter attributes, but it is not limited to just HttpContext: objects inheriting from ActionFilterAttribute must contain a parameterless default constructor.  Without one, these objects cannot be created when used to decorate a class or action declaration.  Also, the MVC engine is responsible for creating the attributes that decorate class/action declarations, completely bypassing the IoC container and any hope of using property injection.

The only way to gain access to the IoC container is to make sure that it is available through a static public object (such as global.asax) and reference the container directly in the filter constructor.  This is not ideal, but checking other articles on the web, there does not appear to be any better solution available.   It is also a good idea to provide a constructor that can be used for DI for Unit Testing the action filter in isolation.

It is also worth noting that if you control the creation/injection of HttpContextBase within your IoC container, then the version of HttpContext held in filterContext within “OnActionExecuting” will be a different instance to that supplied by the IoC container.  In this example I attempted to use filterContext.HttpContext.Response to write cookie values that were checked later.  This code failed because the action filter and the IoC container worked against separate instances: I would write the cookie, but the subsequent read would not find it.  The Response object in the filter would also always record that the client was not connected.

public class RequiresAuthenticationAttribute : ActionFilterAttribute
{
    // As you can't inject directly into an action filter, this allows full control for
    // unit testing, whilst providing auto creation for actual code.
    private IAuthenticationCookie _authenticationCookie;

    public RequiresAuthenticationAttribute()
    {
        _authenticationCookie = MvcApplication.WindsorContainer.Resolve<IAuthenticationCookie>();
    }

    public RequiresAuthenticationAttribute(IAuthenticationCookie authenticationCookie)
    {
        if (authenticationCookie == null) throw new ArgumentNullException("authenticationCookie");
        _authenticationCookie = authenticationCookie;
    }

    public string Controller { get; set; }
    public string Action { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        ...
    }
}

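For completeness, usage is then just a normal attribute decoration; the parameterless constructor that the MVC engine calls pulls its dependency from the static container (the action name here is illustrative):

```csharp
[RequiresAuthentication]
public ActionResult AccountSummary()
{
    // Runs only after RequiresAuthenticationAttribute.OnActionExecuting
    // has inspected the authentication cookie.
    return View();
}
```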

Updating NServiceBus to use another version of Castle Windsor

Having recently taken a look at NServiceBus, the first obstacle I encountered was that it was not compatible with the version of Castle Windsor we were using.  Luckily, due to the design of NServiceBus, adding a new version of an IoC framework is relatively painless if you follow a couple of basic steps and the format laid out by the existing frameworks.  Whilst the NServiceBus documentation says what you need to do, it does not say why or how.  It probably took longer than it should for me to realise that I could simply download the source code and modify the existing Castle Windsor object builder for the version we were using.

As recommended within NServiceBus I created a new VS project called “NServiceBus.ObjectBuilder.CastleWindsor.2.5.0” adding a reference to the Castle binaries we would be using (as you may have guessed from the project name, that was v2.5.0).  The project only needed two classes defined (Windsor250ObjectBuilder & ConfigureWindsorBuilder) the code for which is listed below.   Please feel free to copy / modify this code as needed – if you find any obvious issues please let me know.  As I’m only just starting out with NServiceBus and Castle Windsor, I’m not sure if I will be able to assist much – but at the very least I can either update this article with a relevant note or improvement.

namespace NServiceBus.ObjectBuilder.CastleWindsor
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Castle.MicroKernel.Releasers;
    using Common;
    using Castle.Windsor;
    using Castle.MicroKernel;
    using Castle.Core;
    using Castle.MicroKernel.Registration;

    /// <summary>
    /// Castle Windsor implementation of IContainer.
    /// </summary>
    public class Windsor250ObjectBuilder : IContainer
    {
        /// <summary>
        /// The container itself.
        /// </summary>
        public IWindsorContainer Container { get; set; }

        /// <summary>
        /// Instantiates the class with a new WindsorContainer using the NoTrackingReleasePolicy.
        /// </summary>
        public Windsor250ObjectBuilder()
        {
            Container = new WindsorContainer();
            Container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
        }

        /// <summary>
        /// Instantiates the class saving the given container.
        /// </summary>
        /// <param name="container"></param>
        public Windsor250ObjectBuilder(IWindsorContainer container)
        {
            Container = container;
        }

        void IContainer.Configure(Type concreteComponent, ComponentCallModelEnum callModel)
        {
            var handler = GetHandlerForType(concreteComponent);
            if (handler == null)
            {
                var lifestyle = GetLifestyleTypeFrom(callModel);

                var reg = Component.For(GetAllServiceTypesFor(concreteComponent)).ImplementedBy(concreteComponent);
                reg.LifeStyle.Is(lifestyle);

                Container.Kernel.Register(reg);
            }
        }

        void IContainer.ConfigureProperty(Type component, string property, object value)
        {
            var handler = GetHandlerForType(component);
            if (handler == null)
                throw new InvalidOperationException("Cannot configure property for a type which hadn't been configured yet. Please call 'Configure' first.");

            handler.AddCustomDependencyValue(property, value);
        }

        void IContainer.RegisterSingleton(Type lookupType, object instance)
        {
            //Container.Kernel.AddComponentInstance(Guid.NewGuid().ToString(), lookupType, instance);
            Container.Register(Component.For(lookupType).Named(Guid.NewGuid().ToString()).Instance(instance));
        }

        object IContainer.Build(Type typeToBuild)
        {
            try
            {
                return Container.Resolve(typeToBuild);
            }
            catch (ComponentNotFoundException)
            {
                return null;
            }
        }

        IEnumerable<object> IContainer.BuildAll(Type typeToBuild)
        {
            return Container.ResolveAll(typeToBuild).Cast<object>();
        }

        private static LifestyleType GetLifestyleTypeFrom(ComponentCallModelEnum callModel)
        {
            switch (callModel)
            {
                case ComponentCallModelEnum.Singlecall: return LifestyleType.Transient;
                case ComponentCallModelEnum.Singleton: return LifestyleType.Singleton;
            }

            return LifestyleType.Undefined;
        }

        private static IEnumerable<Type> GetAllServiceTypesFor(Type t)
        {
            if (t == null)
                return new List<Type>();

            var result = new List<Type>(t.GetInterfaces()) { t };

            foreach (var interfaceType in t.GetInterfaces())
                result.AddRange(GetAllServiceTypesFor(interfaceType));

            return result;
        }

        private IHandler GetHandlerForType(Type concreteComponent)
        {
            return Container.Kernel.GetAssignableHandlers(typeof(object))
                .Where(h => h.ComponentModel.Implementation == concreteComponent)
                .FirstOrDefault();
        }
    }
}

namespace NServiceBus
{
    using ObjectBuilder.Common.Config;
    using Castle.Windsor;
    using ObjectBuilder.CastleWindsor;

    /// <summary>
    /// Contains extension methods to NServiceBus.Configure.
    /// </summary>
    public static class ConfigureWindsorBuilder
    {
        /// <summary>
        /// Use the Castle Windsor builder with the NoTrackingReleasePolicy.
        /// </summary>
        /// <param name="config"></param>
        /// <returns></returns>
        public static Configure CastleWindsor250Builder(this Configure config)
        {
            ConfigureCommon.With(config, new Windsor250ObjectBuilder());

            return config;
        }

        /// <summary>
        /// Use the Castle Windsor builder passing in a preconfigured container to be used by NServiceBus.
        /// </summary>
        /// <param name="config"></param>
        /// <param name="container"></param>
        /// <returns></returns>
        public static Configure CastleWindsor250Builder(this Configure config, IWindsorContainer container)
        {
            ConfigureCommon.With(config, new Windsor250ObjectBuilder(container));

            return config;
        }
    }
}
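Wiring it in then follows the usual NServiceBus fluent configuration; a sketch only, as the surrounding configuration calls will vary with your setup:

```csharp
// Assumes a Windsor container configured elsewhere in the application.
var container = new WindsorContainer();

// Hand the container to NServiceBus via the extension method defined above.
NServiceBus.Configure.With()
    .CastleWindsor250Builder(container)
    .XmlSerializer();
```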



Monday, 27 September 2010

Unit Testing: Selecting an Action from an MVC Controller

To fully test an MVC web site it is important to test (in isolation) the following:
  1. Behaviour of controller actions
  2. Behaviour of any custom action filters.
  3. The decoration of action filter attributes on controller actions.

To test the 3rd point, you must use reflection to select the desired action from the controller.  The following method takes an action name and a Tuple array of "Type" and "String"; together this combination should be enough to isolate a desired action.  Note: an empty tuple array is used to specify no input parameters, whilst a null tuple array specifies that parameters shouldn't be used to restrict the action (in this case the action name must be null).

public MethodInfo SelectAction(Controller controller, string actionName, Tuple<Type, string>[] expectedParameters = null)
{
    var methods = controller.GetType()
        .GetMethods(BindingFlags.Public | BindingFlags.Instance)
        .Where(mi => mi.Name.Equals(actionName));

    if (expectedParameters != null)
        methods = methods.Where(mi => CompareParameters(mi.GetParameters(), expectedParameters));

    if (methods.Count() == 1) return methods.First();

    var ex = new Exception((methods.Count() == 0)
        ? "No matching actions could be found on controller"
        : "More than one matching action could be found on controller");
    ex.Data.Add("Controller", controller.GetType());
    ex.Data.Add("Action", actionName);

    if (expectedParameters != null)
    {
        var parameterList = new StringBuilder();
        foreach (var expectedParameter in expectedParameters)
        {
            if (parameterList.Length > 0) parameterList.Append(", ");
            parameterList.AppendFormat("{0} {1}", expectedParameter.Item1.Name, expectedParameter.Item2);
        }
        ex.Data.Add("ParameterList", string.Format("({0})", parameterList.ToString()));
    }

    ex.Data.Add("Matching Actions", methods.Count());
    throw ex;
}

This can then be called using the simple format below.  This code confirms that the controller has two search actions.  One taking no input parameters, whilst the other takes a bound input model.

[Test]
public void Controller_Has_Two_Search_Actions()
{
    MethodInfo action = null;

    Assert.DoesNotThrow(() => action = SelectAction(new CustomerController((new MockHttpContext()).Object), "Search", new Tuple<Type, string>[] { }));
    Assert.That(action.Name, Is.EqualTo("Search"));

    Assert.DoesNotThrow(() => action = SelectAction(new CustomerController((new MockHttpContext()).Object), "Search", new[] { new Tuple<Type, string>(typeof(CustomerSearchInputModel), "inputModel") }));
    Assert.That(action.Name, Is.EqualTo("Search"));
}

In the next post I will cover how to use this to assert the decoration of action filter attributes.


Sunday, 26 September 2010

Injecting AutoMapper with IoC

Update - 14th February 2016:
Looking at my blog stats, this continues to be one of my most popular articles, so it is most definitely worth an update. As of v4.2.0, AutoMapper has been updated to remove the static implementation. I've not had a chance to play with the new version yet, but I would imagine it will now work with any IoC container you wish to use it with.


Original Article:
The main “Mapper” class of the AutoMapper framework is a static class, which makes initial testing and usage really easy but quickly causes problems if you’re using an Inversion of Control framework such as Castle Windsor.  Also, being static, it is far harder to abstract the AutoMapper framework out of any unit tests using a mocking framework such as MOQ.  As a final point, for all these reasons using AutoMapper directly could cause problems in the future if it was decided to switch to another mapping framework.
The following code handles all of the above concerns by wrapping the AutoMapper static instance up in a single bespoke mapping class.  At first glance this would appear to be a step backwards, as it requires each object-to-object mapping to be defined as a method in the interface and implementing class.   However, for very little overhead or extra coding, this comes with the benefit of abstracting the “how” of the mapping from the calling code.  It would be easy to add another mapping framework into the Mapper class below and then update individual methods to use that new framework.  It also helps prevent premature optimisation, as the code can be deployed with new methods using mappings that provide maximum automation and minimal manual configuration.  If performance issues then arise during testing or live usage, individual methods/mappings can be tackled to provide better performance.  For maximum throughput, if the situation required it, this could result in the mapping framework(s) being bypassed completely and all initialisation of the target class performed directly within the mapping method.

using am = AutoMapper;

namespace Tools
{
    public interface IMapper
    {
        CustomerEntity MapCustomer(CreateCustomerMsg msg);
        CustomerEntity MapCustomer(UpdateCustomerMsg msg);
    }

    public class Mapper : IMapper
    {
        static Mapper()
        {
            // All creation code in the static constructor
            am.Mapper.CreateMap<CreateCustomerMsg, CustomerEntity>();
            am.Mapper.CreateMap<UpdateCustomerMsg, CustomerEntity>();
        }

        public CustomerEntity MapCustomer(CreateCustomerMsg msg)
        {
            return am.Mapper.Map<CreateCustomerMsg, CustomerEntity>(msg);
        }

        public CustomerEntity MapCustomer(UpdateCustomerMsg msg)
        {
            return am.Mapper.Map<UpdateCustomerMsg, CustomerEntity>(msg);
        }
    }
}
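To tie this back to the IoC concern above, registering the wrapper is then a single line. A sketch for Castle Windsor (the singleton lifestyle is my assumption; the wrapper is stateless after its static constructor runs, so it seems a reasonable default):

```csharp
// Consumers now take IMapper in their constructors, which Windsor
// resolves to the bespoke wrapper; MOQ can stub IMapper in tests.
container.Register(
    Component.For<IMapper>()
             .ImplementedBy<Tools.Mapper>()
             .LifeStyle.Singleton);
```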





Playing with AutoMapper

In my current role we have come across a requirement to map message objects to more complex domain entities.  As anyone that has done this previously will quickly tell you, manually writing code to do this is very boring and repetitive.  With very limited time and resources, that effort could be better spent elsewhere on the project.  As it seemed very simple to set up and start using we’ve decided to use AutoMapper.

Configuring / Using
Setting up a basic mapper is as simple as:
Mapper.CreateMap<Source,Target>();
var target = Mapper.Map<Source,Target>(source);
The call to CreateMap only needs to be made once for the duration of the AppDomain, calls to Map are then made as needed.
As well as direct property-name mapping, several other more complex mappings are provided “out of the box”.  This includes flattening properties of child objects into named properties on the target object.   AutoMapper also provides several different methods for custom mappings, either inline in the form:

Mapper.CreateMap<CalendarEvent, CalendarEventForm>()
    .ForMember(dest => dest.EventDate, opt => opt.MapFrom(src => src.EventDate.Date))
    .ForMember(dest => dest.EventHour, opt => opt.MapFrom(src => src.EventDate.Hour))
    .ForMember(dest => dest.EventMinute, opt => opt.MapFrom(src => src.EventDate.Minute));
Or using custom formatting classes in the form:


Mapper.CreateMap<string, DateTime>().ConvertUsing(new DateTimeTypeConverter());

public class DateTimeTypeConverter : ITypeConverter<string, DateTime>
{
    public DateTime Convert(string source)
    {
        return System.Convert.ToDateTime(source);
    }
}


Performance
Initial tests have thrown up no surprises; the flexibility that Automapper provides does come with some overhead – but in most cases I’m sure that the performance is going to be fine for the project in question.  Being able to quickly map messages to domain objects and move on is going to improve the speed in which we can develop the project.  If problems are encountered once the code base is live, there is nothing to stop the areas in question being re-investigated and/or rolled back to manual mappings.



Wednesday, 22 September 2010

Mocking HttpCookieCollection in HttpRequestBase

When unit testing ASP.NET MVC2 projects the issue of injecting HttpContext is quickly encountered.  There seem to be many different ways of, and recommendations for, mocking HttpContextBase to improve the testability of controllers and their actions.  My investigations into that will probably become a separate blog post in the near future, but for now I want to cover something that had me stuck for longer than it probably should have: how to mock the concrete (non-abstract, non-interfaced) classes exposed by HttpRequestBase and HttpResponseBase, namely the HttpCookieCollection class.  The code sample below illustrates how it can be used within a mocked instance of HttpRequestBase.  Cookies can be added or modified within the unit test code prior to being passed into the code under test.  Afterwards, using a combination of MOQ's Verify and NUnit's Assert, it is possible to check how many times the collection was accessed (remembering to include the set-up calls) and that the relevant cookies have been added / set up.

 

MOQ / NUnit Code Sample
namespace Tests
{
    using System.Web;
    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class HttpRequestBaseTests
    {
        [Test]
        public void Test()
        {
            var request = new Mock<HttpRequestBase>();
            request.SetupGet(r => r.Cookies).Returns(new HttpCookieCollection());
            request.Object.Cookies.Add(new HttpCookie("TestCookie", "TestValue"));
            request.Object.Cookies.Add(new HttpCookie("AnotherCookie"));

            var instance = request.Object;
            var cookie = instance.Cookies["TestCookie"];

            request.Verify(r => r.Cookies, Times.Exactly(3));  // Include the one expected reference, plus two setup calls.

            Assert.That(cookie, Is.Not.Null.And.InstanceOf(typeof(HttpCookie)));
            Assert.That(cookie.Name, Is.Not.Null.And.InstanceOf(typeof(string)).And.EqualTo("TestCookie"));
            Assert.That(cookie.Value, Is.Not.Null.And.InstanceOf(typeof(string)).And.EqualTo("TestValue"));
        }
    }
}

Tuesday, 17 August 2010

Total WTF moment whilst reading Clean Code

I've just spent a useful and enjoyable day finally getting around to reading Clean Code, definitely a book to come back to time and again for reference.  However, given the subject matter the book covers, I was amazed to find in the section about "magic numbers" a statement that "some constants are so easy to recognise that they don't always need a named constant".  Two of the examples given are the number of feet per mile and the number of work hours per day.  I'm not sure how many people who work with imperial units could easily state how many feet are in a mile, and if you are more used to metric units then that figure has next to no meaning whatsoever.  At least in the UK, working hours vary depending upon the industry, company and even department that you work in.  Whilst it doesn't detract from the content of the book, I'm not sure those statements should have got past the proofreading / draft review stage.

Monday, 16 August 2010

Response to "Two different ways to create bad logic in unit tests"

Today I read Roy Osherove's blog post "Two Different ways to create bad logic in unit tests".  It's an interesting article that covers logic included in many unit tests: is it acceptable to compare a returned string value against an expected value built via string concatenation?  It is perfectly possible that an issue introduced by the concatenation process is duplicated in both the code and the unit test - more often than not the logic has been cut / pasted from the code base into the unit test anyway.

As an example directly from Roy's blog consider the following code:

[TestMethod]
public void SomeTest()
{
  string user = "user";
  ComponentUnderTest cut = new ComponentUnderTest();
  string result = cut.SayHello(user);
  string expected = "hello " + user;
  Assert.AreEqual(expected, result);
}

Thinking this through, it makes perfect sense; the following could be a better test:

[TestMethod]
public void SomeTest()
{
   string user = "user";
   ComponentUnderTest cut = new ComponentUnderTest();
   string result = cut.SayHello(user);
   string stringPrefix = "Hello ";
   Assert.IsTrue(result.StartsWith(stringPrefix));
   Assert.IsTrue(result.EndsWith(string.Format(" {0}", user)));
   Assert.AreEqual(stringPrefix.Length + user.Length, result.Length);
}


Could this be the answer?  It certainly removes any possible duplicated logic that may hide issues in the string manipulation, but would it be workable in a more complex example?

Sunday, 15 August 2010

Project Euler – Problems 18 & 67

The challenge set by problem 18 was

By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

3
7 4
2 4 6
8 5 9 3

That is, 3 + 7 + 4 + 9 = 23.

A 15-row triangle was then supplied for which the program must determine the corresponding maximum value taking a similar path through the data.  A footnote to the problem highlighted that, due to the "limited" number of paths in this puzzle, it could be solved by brute force: determining the total for each path through the triangle and selecting the maximum total returned.  However, a link to problem 67 was provided, which is exactly the same problem but with data for a one-hundred-row triangle.  For that problem a brute force attack could not be used, as the number of possible routes meant that:

If you could check one trillion routes every second it would take over twenty billion years to check them all. There is an efficient algorithm to solve it. ;o)

In researching possible approaches I came across an Excel solution by Tushar Metha.  This really simplified the approach: turn the data upside down and, starting on the last row, take two adjacent cells ("c1" and "c2") and their corresponding cell on the row below, "c3" (remember the triangle has been turned upside down).  The two routes for that subset are determined and the maximum value is placed in cell "c3".  In the example below, 5 is "c1", 9 is "c2" and 4 is "c3".  So "c1 + c3" = 9 whilst "c2 + c3" = 13, and the contents of c3 is replaced with 13.  This process is repeated for the entire row.

5   9
  4

Once the entire row has been processed you "move" down a row and repeat the process.  For problem 67, instead of taking twenty billion years, it takes less than a second - quite a nice time saving over the brute force approach.
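The same bottom-up approach translates directly into C#.  A minimal sketch using the four-row example triangle from problem 18 (a jagged-array representation is assumed here; this is not the code from the linked solution):

```csharp
using System;

public static class TrianglePath
{
    public static void Run()
    {
        // The four-row example triangle from problem 18.
        int[][] triangle =
        {
            new[] { 3 },
            new[] { 7, 4 },
            new[] { 2, 4, 6 },
            new[] { 8, 5, 9, 3 }
        };

        // Work from the penultimate row upwards, replacing each cell
        // with itself plus the larger of its two children below.
        for (var row = triangle.Length - 2; row >= 0; row--)
        {
            for (var col = 0; col < triangle[row].Length; col++)
            {
                triangle[row][col] += Math.Max(triangle[row + 1][col], triangle[row + 1][col + 1]);
            }
        }

        Console.WriteLine(triangle[0][0]);  // Prints 23 for this triangle.
    }
}
```

Each pass collapses one row, so the work done is proportional to the number of cells rather than the number of paths - hence the sub-second run time even for the hundred-row triangle.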

Source code: VS2010 C# Solution

Problem Stats:

  • 23 out of 299 problems solved (2 more until first level!)
  • No Ranking (based on problems 275 to 299)

Monday, 9 August 2010

Project Euler – Problem 54

The challenge set by Problem 54 was to determine the number of poker games player 1 won, given a text file detailing 1,000 hands dealt to 2 players.  Given the logical nature of the rules, the solution was just a case of finding the best way to 1) implement the rules and 2) duplicate the rule hierarchy.  I quickly re-factored my first attempt, which placed the rule-based logic inside the PokerHand class, as I found that whilst this made it easy to compare the matching combinations of a single hand, it was much harder to compare two hands and determine the winning combination - and harder still when two hands contained the same winning combination and the next highest combination had to be selected from each to be compared.

At this point I simplified the PokerHand class to just being a container for an IEnumerable Card collection and some basic logic to confirm the hand was valid (i.e. contained 5 cards).  The logic to find the winning player was migrated into a PokerEngine class, which took two populated instances of PokerHand (player 1 and player 2).  After some basic validation, both hands are passed into a rule which checks to see which hand (if any) wins.  If neither hand wins, or the rule results in a draw, then the next rule in the hierarchy is checked, until either a matching hand is found or the round is deemed a draw (as the documentation clearly stated that each hand had a clear winner, this situation would result in an exception being thrown).  To help unit testing and the re-factoring of the original code, each validation rule was still performed against a single hand, returning the outcome of the check.  The outcomes of the checks performed against both hands were then used to determine the winner.
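The rule hierarchy described above could be chained along these lines.  This is a sketch only: apart from DetermineIfWinOnRoyalFlush, the method names here are hypothetical and not taken from the actual solution.

```csharp
using System;

public class PokerEngine
{
    public WinningHandEnum DetermineWinner(PokerHand player1Hand, PokerHand player2Hand)
    {
        // Check each combination in descending order of strength; the first
        // rule that produces a winner short-circuits the rest of the chain.
        var checks = new Func<PokerHand, PokerHand, WinningHandEnum>[]
        {
            DetermineIfWinOnRoyalFlush,
            DetermineIfWinOnStraightFlush,  // hypothetical
            DetermineIfWinOnFourOfAKind,    // hypothetical
            // (remaining combinations down to high card)
        };

        foreach (var check in checks)
        {
            var winner = check(player1Hand, player2Hand);
            if (winner != WinningHandEnum.None)
            {
                return winner;
            }
        }

        // The problem statement guarantees every round has a clear winner.
        throw new InvalidOperationException("No winning hand found.");
    }
}
```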

The following code is the first logic check to find the winning hand; the first combination to check for is the Royal Flush, as it beats any other.  If both hands contain a Royal Flush, a quick shortcut can be called to see which hand has the highest card:

var winningPlayer = DetermineIfWinOnRoyalFlush(player1Hand, player2Hand);
if (winningPlayer != WinningHandEnum.None)
{
    return winningPlayer;
}

The above code calls the function shown below.  This function utilises the separate rule handler that checks hands in isolation, meaning each method on HandValidationRules can be unit tested in isolation from the others.

private static WinningHandEnum DetermineIfWinOnRoyalFlush(PokerHand player1Hand, PokerHand player2Hand)
{
    var player1Pair = HandValidationRules.RoyalFlush(player1Hand);
    var player2Pair = HandValidationRules.RoyalFlush(player2Hand);

    if (player1Pair == null && player2Pair == null)
        return WinningHandEnum.None;

    if (player1Pair == null)
        return WinningHandEnum.Player2;
    if (player2Pair == null)
        return WinningHandEnum.Player1;

    return WinningHandEnum.None;
}

The validation rules for RoyalFlush are shown below:

internal static RoyalFlushWinningCombination RoyalFlush(PokerHand pokerHand)
{
    if (!pokerHand.ValidHand)
    {
        return null;
    }
    if (ValueOfHighestCard(pokerHand) != 14)
    {
        return null;
    }
    if (!AllOfSameSuite(pokerHand) || !ConsecutiveValues(pokerHand))
    {
        return null;
    }
    return new RoyalFlushWinningCombination();
}
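The helper checks used by the rule above (ValueOfHighestCard, AllOfSameSuite and ConsecutiveValues) aren't shown in the post.  A possible LINQ-based implementation, assuming PokerHand exposes a Cards collection and Card has Suite and Value properties (with aces valued at 14), might look like:

```csharp
using System.Linq;

internal static class HandValidationHelpers
{
    internal static int ValueOfHighestCard(PokerHand hand)
    {
        return hand.Cards.Max(card => card.Value);
    }

    internal static bool AllOfSameSuite(PokerHand hand)
    {
        // Every card shares one suite when only one distinct suite exists.
        return hand.Cards.Select(card => card.Suite).Distinct().Count() == 1;
    }

    internal static bool ConsecutiveValues(PokerHand hand)
    {
        // Sort the values, then confirm each neighbouring pair differs by one.
        var values = hand.Cards.Select(card => card.Value).OrderBy(v => v).ToList();
        return values.Zip(values.Skip(1), (current, next) => next - current).All(difference => difference == 1);
    }
}
```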

$(Document) is null or not an object

I've just spent a very painful hour or so debugging a jQuery issue that turned out to be a self-inflicted problem!  The site I was working on had been working perfectly all morning, then a particular page began to fail on a refresh (browser F5).  It was possible to browse to the page and it displayed correctly, but pressing F5 to refresh the page caused it to fail on the statement below.  Checking the contents of document showed it to be populated, but $(document) failed with an error saying it was either null or not an object.

$(document).ready(function() {
   AttachCloseEvents();
});

It just didn't make sense: navigating to the page everything would work as expected, but refreshing the very same page caused it to fail - and it didn't matter whether it was the start-up page of the Visual Studio project or not.  In one of the many attempts to isolate / identify the problem I cleared the browser cache.  Amusingly this broke the entire site - the error now occurred every time the page was viewed.

It was at this point that I noticed I'd accidentally moved the project's localised copy of jQuery.  The requests to the page that had worked originally had been using a browser-cached copy of the file - it was the refresh of the page that was acting correctly.

Tuesday, 27 July 2010

WCF Hosted on a Network Share

Whilst working on a proof-of-concept WCF service I ran into an interesting security "zone" issue which caused me a few headaches for an afternoon.  I was moving some code out of a prototype WinForms application into a WCF service to prove and demonstrate the next phase of the infrastructure.  The functionality in question referenced some custom 'C' libraries which had already been proven to run when referenced locally within the Windows application.  However, moving the same code - and therefore the reference to the 'C' libraries - into a WCF service resulted in most calls to the library throwing a "Security Exception".  Stepping through the debugger, the issue was clear enough: calling "unsafe" code requires Full Trust, but even though everything was running locally the WCF service was showing as "Internet" zone and triggering the security exception.  This had me confused for quite a while.  I wasn't hosting the WCF service within IIS (see previous post, no admin rights!), so my only option was the WCFHost application.  No searches on Google could find anything related; everything seemed to indicate that in this situation the WCF application should run in the same security zone/context as the calling application - which I had verified was running in Full Trust mode.

My final attempt before calling it a day and going home was to try registering the DLL in the GAC and hoping that solved the trust issue.  It was only when clicking on "Add an assembly to the assembly cache" that I noticed only two drive letters were listed.  The solution I was developing was located on a mapped network drive that didn't appear in the list.  As a quick test I moved the solution onto one of the non-mapped drives, recompiled the application and tried again.  This time everything worked - developing on a mapped drive had resulted in WCFHost dropping the security level that the code was running under!


Is .NET development possible without Admin Rights?

I guess the obvious answer is "yes", as there is always Notepad and command-line compilation, but if you want to use Visual Studio or do anything with IIS, message queues, etc. then the answer quickly becomes a frustrating "no"!  This hasn't usually been a problem for me, but having just started a new job at a company which is just starting out in .NET, I'm finding out what a frustrating experience not having admin rights can be.  Visual Studio (2008) has been installed, so that's one hurdle out of the way, but a trial install of ReSharper needs admin rights, as does NUnit.

I see MS are trying to make things better with a non-admin version of IIS7 (called IIS Express) around the corner, which needs VS2010 (another thing on the shopping list).  But until the Visual Studio toolset can be installed using a non-admin account, it's going to continue to be a frustrating experience!

Wednesday, 7 July 2010

Producing HTML with less Typing

A helpful little tool to reduce the amount of typing needed to produce HTML, with a more detailed introduction on Smashing Magazine.  It also includes a ReSharper 5.0 plug-in.