Monday, 31 December 2012

Looking forward to 2013

First and foremost, the biggest change in 2013 is that I will be responsible for managing the development team. To help the team with the challenges we will face in 2013 I will need to stay hands-on too, so time management will be critical if I am to fit everything in. This will probably be the biggest challenge I have faced in the last few years and something I'm really looking forward to getting started with.

Another change for 2013 will be refining our agile process. For the 2nd half of 2012 we followed a scrum methodology, successfully developing and deploying an MVC replacement for an old ASP legacy application. Feedback from the technical teams resulted in us modifying the process, removing task hour estimation from the sprint planning session with no measurable negative effect. Additional feedback from the business indicated that sprints were causing some friction when defining stories and planning releases. The 6-9 month development cycles of waterfall just don't work for most software applications, but at the same time, the artificial barriers that an 'x' week sprint causes can confuse the business. As we'd come to similar conclusions in my previous role, we are going to try to develop a "Kanban" methodology that will work for us.

  • We want to keep the short and frequent feedback loops and engagement with the business.
  • We also want to keep most stories as small as they can possibly be (days rather than weeks), but for some functionality we just don't want to be forced to break up a story just to make it fit within a sprint (but I must stress that we hope these to be the exception, otherwise it is just waterfall by the back door).
  • We also want to look into releasing at the end of every story so the business can get the value of development work as quickly as possible.

To help the business adjust to the harsh economic climate that we still face, the biggest challenge is going to be making sure that every single bit of effort really counts! This includes everything: we need to improve the deployment process across our application suite, making sure we really do live continuous integration / deployment. All development work needs to address existing bugs and performance issues whilst providing much needed new functionality. As part of this we should get the chance to learn Puppet, refine our TeamCity skills, potentially look at a DVCS and, most importantly, have fun developing!

Review of 2012

2012 has been a full-on year with lots of change. I started the year in my previous role, preparing for a transition into a newly created role of "Solution Architect", moving away from day to day coding and from concentrating purely on .NET applications / systems. It sounded like a really interesting challenge, but another opportunity presented itself working for my current company in another newly created role of "Technical Team Lead". It was a hands-on development role, leading a team of 3 developers bringing a large business critical application in-house and helping to roll out Scrum and other practices (such as TDD/BDD, Continuous Integration, etc.) to the business and team.

The very first challenge was the knowledge transfer sessions for the "out-sourced" application, which had been designed and developed by one company and then had its maintenance passed onto a second company. It's probably fair to say that the code has grown organically rather than being designed, and it has undergone several architectural changes without completing or replacing the previous model(s). As a result it contains both direct SQL access via ADO.NET and at least 2 different ORM implementations. Many other areas of the code base contain a mixture of design patterns alongside some large "procedural" class methods. There were no automated tests of any form, and most documentation had either been lost or was now hopelessly out of date! Having just had the opportunity of working on a completely green field system in my last role, getting the chance to apply that knowledge to a large, deployed, business critical brownfield system seemed like a nice challenge (although I have spent much time wondering if I was mad!).

With such an interestingly complex system we had most success videoing the hand-over sessions, recording what didn't work as much as what did; this gave us both the instant knowledge and the ability to go back after the event to refresh our memories. It also means those sessions are available as a record for new members joining the team. The very first win was the ability to deploy and develop all aspects of the system locally, with instruction guides that could be followed by the entire team and worked every time on every machine (no small achievement!). We are now preparing for the production deployment of our first "in-house" developed release of this application, including a few small pieces of new functionality but mostly containing massively increased logging to help track down issues reported by the business in the production environment.

As well as taking over responsibility for the main application code base, we have replaced an existing ASP system with an MVC application developed in sprints using Scrum, TDD and TeamCity Continuous Integration - a massive learning curve for the team which they handled extremely well. This system has had two functionality deployments so far, with a third currently in test. This is a big improvement over the current norm of deployments every 3 or 6 months but there is still a little way to go before we can look to release at least every sprint.

I think that on the whole the team can be really happy as we have accomplished everything that was promised in 2012, just not necessarily everything that was hoped for - but we have to leave something to improve upon in 2013!

Thursday, 13 December 2012

VS2012 does not support Silverlight 3.0

Not sure that this post really needs anything more than the title. If you are upgrading a Silverlight project from VS2008 or VS2010, be aware that VS2012 only supports versions 4 & 5 of Silverlight. If you need to bring an older solution / project into VS2012, I'd recommend upgrading to at least Silverlight 4 first in your existing Visual Studio and then, once it's working, upgrading the solution/projects to VS2012 so you're only tackling one set of issues at a time.

Saturday, 8 December 2012

Upgrading a TeamCity build from VS2008 to VS2012 and using NuGet

We've recently upgraded to Visual Studio 2012 from VS2008 and switched over to using NuGet rather than direct project references for our third party tools. Everything worked as planned until we checked the solution into source control and the personal build for TeamCity kicked off. Almost straight away the build fell over with the following error message:

D:\TeamCity\buildAgent\work\e6ae794aab32547b\.nuget\nuget.targets(102, 9):
error MSB4067: The element <ParameterGroup> beneath
element <UsingTask> is unrecognized.
Project BJ.Core.sln failed.

Our projects were still targeting .NET 3.5, but to fix the problem we needed to update the Visual Studio version in the TeamCity build configuration.

Note: we are using Visual Studio 2012, but our TeamCity server is currently hosted on a Windows Server 2003 instance, so we must select the VS2010 option in our build configuration (the VS2012 option only works on Server 2008 and higher due to the .NET 4.5 limitation).

Can't target .NET 3.5 after upgrading a Visual Studio 2008 solution to 2012

I ran into an interesting problem today when upgrading a Visual Studio 2008 project to Visual Studio 2012 whilst trying to leave the targeted framework at .NET 3.5. Each time I tried to open the solution, all my test projects were automatically upgraded to .NET 4.0 regardless of what I did. It was impossible to downgrade the projects using either the project property page or by manually editing the project file: I'd make the change and reload the project, a project conversion report would be shown and the project was back to targeting 4.0 again.

After a little more digging around I noticed that it was only my test projects that were doing this; all the other class libraries, etc. were perfectly happy targeting 3.5. After a little more experimentation I isolated this to the ProjectTypeGuids section of the project file: if I removed this from the project definition then I could re-target my unit test project at v3.5 and everything was happy. Note: as the linked blog post indicates, you might lose the ability to add new tests directly from the "Add New" menu - it would appear you can have one or the other, but not both!

Saturday, 1 December 2012

Agile Delivery Experiences

I'm now in my second role in which I've had the chance to introduce agile working practices to the team. In both roles the projects and applications developed under Scrum have been successfully shipped and accepted by the business. The success of the deliveries has been measured by:

  • Functionality: The early visibility the business gained through the end of sprint demonstrations made sure that all the functionality the team developed stayed on track and provided exactly what the business wanted. There were no nasty surprises once the final code was shipped!
  • On Time: It is generally agreed that you can only have two of the following: quality, functionality and/or a "business set" ship date. The hardest lesson for a business to learn is that if it wants a set amount of functionality by a set ship date, then the only thing that can be modified by the team to meet the expectations is quality. Luckily in both instances everyone agreed that quality shouldn't be compromised and the business knew the functionality it needed, so the ship date was determined from our velocity and the story points for the desired functionality. In a couple of instances we didn't complete everything the team had committed to for a sprint, but the business was happy, because whilst the delivery did slip a sprint, they were confident it was within the team's ability to meet the goal.
  • Bug Free: Experience has shown that shipping the desired functionality on an agreed date is rare enough but after launch the applications have experienced an extremely low defect rate - I've never experienced this prior to switching to an agile development methodology. A TDD approach was used in both roles, but I really believe the benefit of TDD is in ongoing maintenance whilst the initial low defect count can be attributed to the scrum process.

I'm now looking forward to my next challenge/project to apply everything we've learnt in the past projects; hopefully with the same successes.

Do "Task Hours" add anything in Scrum (Agile)?

What do task hours add to the overall process in scrum?

This is a question that has arisen from team members in both instances where I've helped a team switch over to Scrum. The benefits of artifacts like comparative story point estimation, the 2 week sprints, stand-ups and the end of sprint demo have been self-evident to the team, but almost as one, every team member has expressed dismay when it comes to task planning and estimating each task in hours. Left unchecked there is a natural tendency for people to actually begin to dread the start of each sprint purely because of the task planning session.

In my current role we've been lucky to investigate this further as a team.

The team sat down to discuss the problems it was experiencing with estimating tasks in hours and the following common themes appeared:

  • It is hard: Maybe it shouldn't be, but time estimation is hard! Story points are comparative and abstracted, making them easier to determine, but a time estimate is generally expected to be pretty exact and accurate. There is no getting away from the fact that time estimation is just plain difficult (and not enjoyable) for most people!
  • It takes too long: The discussion around how long a task will take takes time; each story might have 5+ tasks, so it could end up taking an hour just to task (and time-estimate) a single story! It might be easy to agree in seconds that we need a task to add new functionality to a service, but a discussion to estimate the duration of that task will take several minutes, if not longer.
  • Outcome of one task affects the duration of a subsequent task: Say you have two tasks, one to create / update Gherkin and another to create / update the bindings. How long the second task takes is directly proportional to how many features / scenarios change or are created in the first task, which is nigh on impossible to estimate at the task planning stage.
  • It doesn't involve all the team: When the testers are discussing an estimate for the testing task, the developers are under-utilised and vice versa. We tried having two task estimate discussions going on at the same time and that didn't work very well either.
  • Working to hours, rather than done: We all seemed to realise that there was a natural tendency to sometimes work to the hours rather than the definition of done! If you knew you only had 30 minutes left on the task estimate, it is very hard to add those couple of extra tests that would really trap those edge cases - it's much easier to finish the task on time and put off those tests to another task that might finish earlier than estimated. This is reinforced by the positive comments that generally arise when the "actual" hours line on the sprint burn down graph follows the estimate line.
  • It ignores individual ability: A team is made up of individuals of varying abilities and knowledge. I might estimate a task to take an hour because I worked on something similar last sprint, or am familiar with that functionality, whilst somebody else with different experience will estimate the same task as 3 hours. We are both right in our own estimations, but we shouldn't be allocating tasks at this stage, so which estimate should you use?
  • Compromise reduces value of the process: Leading on from the previous point, if the individual time estimates for a task range from 1 hour to 3 hours, you are looking at a compromise - this tends to work for story points but doesn't lend itself to task hour estimates as it results in a very wobbly line in the burn down graph.
  • Discussing the difference for longer than the difference itself: Continuing this theme; the difference between the largest/smallest estimates may be an hour. If you have 7 people around the table, the moment you spend longer than 8-9 minutes discussing the task (it can happen) you've spent longer (in man hours) discussing it than the difference you are discussing!

Sitting down with the product owner, it was story points / velocity which gave them the confidence that we could deliver what we had committed to.

Sitting down with the scrum master, they understood the concerns of the team and, in light of the comments made by the product owner, agreed to run a trial sprint where stories would be tasked but no hours would be assigned. A commitment was made by the team that we wouldn't slip back into a waterfall approach and that the ability to flag potential issues during the daily stand-ups shouldn't suffer.

We are now in sprint 3 of our trial, and I'm happy to say that the removal of task hour estimation hasn't had any measurable negative impact. Personally, having worked in an agile environment for two years, it did seem strange to be missing the sprint burn-down graph, but the further we get into the trial the clearer it is that the graph was giving us false confidence without providing any actual benefit! Our velocity and ability to commit to / deliver stories hasn't seen any negative impact, whilst we've trimmed 3-4 hours off the task planning meeting every 2 weeks (14-28 man hours!) and everyone starts the sprint feeling ready for it, rather than worn down by the longer meetings that were needed for estimating the task hours!

If you and your team have reached the point where you dread the start of new sprints purely because of the task planning session, why not give dropping the "hours" part of task planning a go and let me know how it works out for you?

Wednesday, 21 November 2012

Website Update

It's been over 2 years in its current form and I really must get around to updating my website. As you can see, I went for a really retro "early days of the web" approach; possibly even with the caveat that it's best viewed in a text-only browser. So, moving on, it's time for an update. I've not drawn up any designs, so that should be interesting, but the site will probably just be an aggregation of me on the web. I think I'm going to write the first version in .NET, using MVC4 and Razor, and whilst I'm not sure there will be much demand, I'll host the code on my GitHub account. So watch this space for updates!

Tuesday, 20 November 2012

Add NuGet Reference in Resharper

Today JetBrains announced on their blog that they've released a ReSharper NuGet plugin that will add project references via NuGet, rather than making direct references to the locally installed code.

As we are currently in the early stages of transitioning from VS2008 to VS2012 we aren't ready to start using NuGet(*) but I think I'll be grabbing this plugin for when we do.

(*) In my last role we were using VS2010 and making heavy use of NuGet, so I can't wait to be able to use it again!

Saturday, 10 November 2012

Project Euler

Over the past couple of years I keep coming back to Project Euler; it's nice when you just want a quick challenge, or to test your maths skills. It has highlighted how rusty some of my concepts were, and I've learnt a whole load of new ones along the way. I really like the way that the example problem given is quite quick to determine using brute force, but the desired answer always needs a more considered approach.

I've just completed problem 26: finding the value of d < 1,000 for which 1/d produces the longest recurring cycle in its decimal fraction. My first (working) solution completed the task in 2.8 seconds, well within the proposed ideal time frame. What's really helpful is then reading through the forum of previous solutions; you can pick up ideas that bring the entire process down to under a second!
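If you want to try the brute force route, the heart of it is just simulating the long division for 1/d and watching for a repeated remainder; the cycle length is the gap between the two sightings. The sketch below is illustrative only (it isn't my exact solution, and the names are my own):

using System;
using System.Collections.Generic;
using System.Linq;

public static class RecurringCycles
{
    // Length of the recurring cycle in the decimal fraction of 1/d, found by
    // performing the long division and recording the step at which each
    // remainder first appears; when a remainder repeats, the cycle length is
    // the difference between the two steps.
    public static int CycleLength(int d)
    {
        var firstSeenAt = new Dictionary<int, int>();
        var remainder = 1 % d;
        var step = 0;

        while (remainder != 0 && !firstSeenAt.ContainsKey(remainder))
        {
            firstSeenAt[remainder] = step;
            remainder = (remainder * 10) % d;
            step++;
        }

        return remainder == 0 ? 0 : step - firstSeenAt[remainder];
    }

    public static void Main()
    {
        // The d < 1000 with the longest cycle is the answer to problem 26.
        Console.WriteLine(Enumerable.Range(2, 998).OrderByDescending(CycleLength).First());
    }
}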

You can see my progress in the graphic on the right hand side, or can check my progress on the project euler site.

Friday, 9 November 2012

To code review or not to code review?

There are many articles on the web about how to do good code reviews, with most of them either discussing their merits or how to obtain maximum value from them.

In a new role I've recently helped introduce and roll out an agile process, and to start with, as part of each story, we always added a task to "code review" all work. We used the differences from source control to make sure only the changes made were reviewed, and made sure that everyone had the chance to take part in reviews so it wasn't just one or two people providing all the feedback on the other team members' work. It worked well; the reviews were productive and provided really good feedback for everyone.

But there was something about them that didn't feel right to the team, and it was the sprint planning process of tasking the stories that gave us the answer to what was bothering us. For every story we added the code review task at the END of the story, which of course, by its very nature, was when all the development had been done! We analysed most of the feedback that came out of the review process and realised that, much like the Wedding Singer, that information would have been useful to us "yesterday".

So as a little experiment we added a task right at the beginning of the story that involved at least 2 developers, sometimes a tester and, for more complex stories, the entire team. Outside of the task planning session the developer(s) assigned the task spend 30-60 minutes detailing the approach they intend to take to solve the story - the other participants in the process get to question the approach, and by the end of the discussion we've normally ironed out all the bumps. During these experimental stories we kept the code review task, but between the initial discussion task and taking a TDD approach (coding against Gherkin provided by the test team) we quickly identified that the code reviews had become redundant.

We've now taken the decision to remove the code review task completely and haven't noticed any decrease in code quality or maintainability. The number of issues reported in the produced code has stayed pretty constant, but we no longer have to spend any time at the end of the story reworking code as a result of review feedback!

Thursday, 8 November 2012

Reducing code noise for argument checking / assignment

Leading on from the previous blog post on reducing code noise when checking for null parameters, these helper functions can be used to further reduce the code noise that generally occurs in class constructors. Most constructors are made up purely of the following duplicated checking / assignment code:

public class MyClass
{
    private readonly object var1;
    private readonly object var2;

    public MyClass(object arg1, object arg2)
    {
        if (arg1 == null)
        {
            throw new ArgumentNullException("arg1");
        }

        if (arg2 == null)
        {
            throw new ArgumentNullException("arg2");
        }

        this.var1 = arg1;
        this.var2 = arg2;
    }
}

Again, putting together a small helper function that makes use of the previous Throw class can remove 90% of the argument checking / assignment code, resulting in:

public class MyClass
{
    private readonly object var1;
    private readonly object var2;

    public MyClass(object arg1, object arg2)
    {
        this.var1 = ReturnParameter.OrThrowIfNull(arg1, "arg1");
        this.var2 = ReturnParameter.OrThrowIfNull(arg2, "arg2");
    }
}
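The ReturnParameter helper itself is in the GitHub repository linked below; as a rough illustration of the idea (the actual implementation may differ slightly), all it has to do is run the check from the previous post and hand the value straight back:

public static class ReturnParameter
{
    // Returns the value unchanged, throwing an ArgumentNullException
    // (naming the supplied parameter) if it is null.
    public static T OrThrowIfNull<T>(T value, string parameterName) where T : class
    {
        Throw.IfNull(value, parameterName);
        return value;
    }
}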

Get the code from GitHub.

Reducing code noise when checking for null parameters

The problem with writing defensive, fail-early code is that it doesn't take long before methods are 70% argument checking and 30% logic:

public void Method(object arg1, object arg2)
{
    if (arg1 == null)
    {
        throw new ArgumentNullException("arg1");
    }

    if (arg2 == null)
    {
        throw new ArgumentNullException("arg2");
    }

    ...
    ...
    ...
}

Code contracts can be a great way to move code out of the method body, but how can you clean up the code just through traditional refactoring? For the last few projects I've worked on, I've been using a small helper class that abstracts all the conditional code out, leaving just a single line call for each check you need.

public void Method(object arg1, object arg2)
{
    Throw.IfNull(arg1, "arg1");
    Throw.IfNull(arg2, "arg2");

    ...
    ...
    ...
}

String parameter checking can wrap up even more conditional logic. The following code throws an ArgumentNullException if the string is null, or just an ArgumentException if it is empty or whitespace.

Throw.IfNullEmptyOrWhitespace(stringValue, "stringValue");
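The full Throw class is in the GitHub repository linked below, but a minimal sketch of it (matching the calls above; the real implementation may differ) looks something like this:

using System;

public static class Throw
{
    // Throws an ArgumentNullException naming the offending parameter.
    public static void IfNull(object value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }

    // Throws an ArgumentNullException if the string is null, or an
    // ArgumentException if it is empty or whitespace.
    public static void IfNullEmptyOrWhitespace(string value, string parameterName)
    {
        IfNull(value, parameterName);

        if (string.IsNullOrWhiteSpace(value))
        {
            throw new ArgumentException("Value cannot be empty or whitespace.", parameterName);
        }
    }
}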

Get the code from GitHub.

Wednesday, 24 October 2012

Blogs as an indication of job satisfaction?

When was the last time you updated your blog?

Have you done anything lately that was worth blogging about?

These are all questions that have been rattling around my head recently. I've experienced it myself and I've seen it in my colleagues: when people are working on interesting stuff, focused on solving clearly defined business goals, their blogs are full of detail and insightful to read! If the business is losing or has lost direction, blogs become an attempt to stay focused and continue honing skills, but slowly, over time, the drain of the working day drags the blog content down to just notes or, worse still, silence.

I'm beginning to see the worth of a blog as a litmus test of job satisfaction - a stagnating or random blog probably indicates a day job that is doing something similar! Many posts (regardless of the stats) probably indicate a full-on and enjoyable day job focused on solving tangible business goals.

There is probably a good argument that a job shouldn't define you and that you can exist outside of work, but if you are a developer and you're spending at least 8 hours per day, 5 days a week doing something that is not progressing your skills and that you're not enjoying, then you probably need to question what you are doing! (And the first person you should question is your boss!)

Monday, 8 October 2012

Project Euler #24

I'm trying to start up on Project Euler again; one thing I do like is that it highlights how poor some of my maths knowledge actually is. Neither school nor college covered many of these algorithms, which is a bit of a surprise given that I did a four year mechanical apprenticeship with applied mathematics... still, it's never too late to learn.

So I (re)started on problem 24 which read:

A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:

012 021 102 120 201 210

What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?

I usually start with a brute force attack on the problem, using TDD and the supplied example to verify the output. This worked nicely for the given example, but even on modern computing power I couldn't get it to finish in a reasonable time to determine the required answer - it was taking much too long!

Resorting to Google for "fast ways to calculate lexicographic permutations" I found a great solution and explanation on www.mathblog.dk. On previous problems I've tried to stay away from exact answers, instead I learnt about the sieve of eratosthenes when trying to find ways to determine the 'nth' prime number and then implementing it myself, but in this case I decided to take their answer and plug it into my code. Having used TDD the whole way through I quickly extracted an interface off my brute force attempt and created a new implementation with this code, dropping that into my unit tests proved it would work and provided the answer in under 3ms.
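The underlying idea (this is my own sketch of the technique rather than the mathblog.dk code) is that with n digits remaining there are (n-1)! permutations for each choice of leading digit, so you can peel off one digit at a time by integer-dividing the remaining zero-based index by (n-1)!:

using System;
using System.Collections.Generic;
using System.Linq;

public static class LexicographicPermutation
{
    // Returns the k-th (1-based) lexicographic permutation of the supplied digits.
    public static string Get(IEnumerable<char> digits, long k)
    {
        var remaining = digits.OrderBy(c => c).ToList();
        var index = k - 1; // switch to a zero-based index
        var result = new List<char>();

        while (remaining.Count > 0)
        {
            // With remaining.Count digits left there are (remaining.Count - 1)!
            // permutations per leading digit, so this picks the leading digit.
            var blockSize = Factorial(remaining.Count - 1);
            var position = (int)(index / blockSize);

            result.Add(remaining[position]);
            remaining.RemoveAt(position);
            index %= blockSize;
        }

        return new string(result.ToArray());
    }

    private static long Factorial(int n)
    {
        long result = 1;
        for (var i = 2; i <= n; i++)
        {
            result *= i;
        }

        return result;
    }
}

Calling LexicographicPermutation.Get("012", 4) returns "120", matching the example above, and the millionth permutation of "0123456789" drops out almost instantly.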

Now 27 problems "solved" and counting; hopefully I'll find a problem that will allow me to reuse the algorithm from this one so I can learn about it some more.

Thursday, 4 October 2012

VS2012/.NET4.5 and Server 2003

Just a word of warning:

Windows NT and Windows Server 2003 do not support .NET 4.5 - you cannot install the framework onto these systems.

So if you are building in Visual Studio 2012 and have to support Server 2003 (or earlier) then you must remember to target .NET 4.0 (or lower) in your project(s), otherwise you will not be able to run them on the target system. That would be a really bad thing to find out at the end of a project; another good reason to implement continuous integration / deployment from the outset (and to deploy to an instance/server that represents the target O/S).

Wednesday, 5 September 2012

Visual Studio 2012

One thing I've found over time is that you can never find licensing details for VS and SQL products when you need them, so here's the link to the VS2012 licensing details.

I quite like what Microsoft are doing with these white papers; the use case examples are really useful and answer most usage questions.

Thursday, 2 August 2012

How to set the background colour of the paper in Raphael

I've started playing with the JavaScript SVG library RaphaelJS. It looks like a really nice library, but the very first hurdle I came across was how to set the background colour of the paper. There didn't seem to be any help in the documentation, and trying 'paper.attr("fill", "#f00");' resulted in the error Uncaught TypeError: Object #<a> has no method 'attr'. A less than optimal solution might be to create a rectangle on the paper that is the same size as the paper and fill that with the desired colour, but then all other objects would need to sit on top of that element and it just feels messy.

Looking for a different approach, I fired up the developer tools in the browser and inspected the HTML; at this point I realised the solution was probably pretty simple. Using a style sheet and setting a CSS background colour for the <svg> tag (e.g. svg { background-color: #f00; }) did exactly what I wanted; it sets the background colour nicely and doesn't need any additional elements.

Monday, 23 July 2012

Breaking JavaScript code down into components

Following on from my original post on learning how to put JavaScript together, it's been a really productive week. Have I managed to write JavaScript that's easy to test? No, but I've wrapped up some reasonably complex logic into a component that can have some of its functionality tested!

At the moment I've settled on QUnit as my JS unit testing framework and have a couple of tests verifying the default values of exposed properties - these are currently exposed as functions, as I've wrapped up the variables in a closure to make them read only (I'm not sure JavaScript even has the concept of a read only property in a form similar to C#). When calling the component under test conditions I've also been overwriting functions on the injected Underscore library that failed due to a null/undefined reference - so not really mocking yet. The first issue I ran into was that I was pre-compiling templates that were defined inside script blocks of the calling ASP.NET view. In C# you don't exercise the view engine when unit testing controller actions, which suggests that having a call to the template inside the main component is a suitable candidate for refactoring. Applying this logic, it has become apparent that my biggest problem is that my current JavaScript components are just one big black box. Using jQuery, Underscore and JSON/templating has made things cleaner, but not really that much more testable!

The next step will be learning how to break the large black boxes down into more focused components that can be easily tested in isolation from each other, and mocked/replaced when used inside the calling component. This already feels much closer to the comparable process in .NET. The code I am currently working on has a dialog box which can be updated - that can be split out into its own component, covering initialisation, display updates and some button context functionality (once the processing is complete the text on the button changes from "Cancel" to "Close" but the behaviour stays the same). These things become easier to test and wrap up a few of the jQuery/templating calls quite nicely (making them easier to "mock"). Another area of functionality which could be split out into its own component is the AJAX calls. Similarly, the "timer" based eventing system could be wrapped up, as I noticed that there were several "clearInterval(...)" calls spread throughout the code.

After all the above refactors have been completed I'm hoping to end up with a single "controller" type object that orchestrates all the newly created components. This should make it far easier to test this controller object and possibly look to finally extract some of the business logic into smaller components too.

It seems strange to say, but it would appear right now that the biggest thing holding my JavaScript coding back was a fear of creating too many objects, preferring a single component that tried to contain and wrap everything up... something that I left behind in the C# world many years ago! This was something of a surprise; up until recently I thought my biggest stumbling block was a lack of knowledge / understanding of JavaScript itself!

Thursday, 19 July 2012

Creating & inserting GUIDs in Visual Studio

If you're using WiX and you're adding components to the product file by hand then you've probably found yourself creating and cutting/pasting a lot of GUIDs, which can be a real pain and a productivity killer. There is the "Create GUID" menu option under Tools, but that still requires opening the tool and then cutting / pasting. The other day one of the other members of our team came up with these steps to create a macro that will create and insert a new GUID and can be bound to a keyboard shortcut.

  1. Go to Tools – Macros – Macro Explorer and paste the following into a module:

    Sub InsertGuid()
        Dim objTextSelection As TextSelection
        objTextSelection = CType(DTE.ActiveDocument.Selection(), EnvDTE.TextSelection)
        objTextSelection.Text = System.Guid.NewGuid.ToString("D").ToUpperInvariant
    End Sub

  2. Then assign it a keyboard shortcut through Tools – Options – Environment – Keyboard and you can generate a GUID in-place with a simple key press.

No more copy/paste

Tuesday, 17 July 2012

Learning how to put JavaScript together properly

After many years of trial and error / practice I feel fairly happy that I can write clean, testable .NET / C# code, free of the hard-wired external dependencies that traditionally make unit testing harder. It's fair to say that I'm a convert to TDD, having implemented it on two separate commercial projects and seen the benefits of the improved maintainability of the code base over time.

But what about JavaScript? Thanks to Selenium and SpecFlow the functionality of the websites I've worked on is pretty well tested, and by default that includes any JavaScript referenced by the pages the UI tests exercise. But that's not the same as having clean, testable JavaScript. Those websites have typically included a mixture of JavaScript defined in-line as well as held separately within JS files. Most functionality has been provided by individual functions attached to events, with any communication handled by coupling functions together and/or attaching attributes to FORM elements. AJAX calls and DOM updates are mixed in with the application logic, making isolated testing pretty much impossible - basically everything we've spent the past years learning how to avoid in our .NET code bases.

But why does this happen? Experience and reading around on the web has highlighted that the above scenario is probably still the norm, rather than the exception. Is it really that difficult to write testable JavaScript? Is JavaScript such a difficult language to learn, or does it lend itself to writing messy code? I've met some very good .NET developers who openly admit that their JavaScript code doesn't live up to their own standards.

As part of the project that I'm now working on, I decided that I'd make sure that this time would be different. I wouldn't settle for writing inline JavaScript, nor would I settle for a mishmash of unrelated functions in the global namespace all somehow working together (and when they didn't, I wouldn't just use alerts and console writes to debug what was happening!). This time I was going to write testable, modular JavaScript that was easy to unit test.

As part of this experiment I'll try to keep this blog updated with our findings, including patterns and practices we find that help. An initial search of the web came up with this article (by Ben Cherry) which details his findings when he joined Twitter: Writing testable JavaScript. Interestingly, the article highlights a few modifications people might want to consider to some currently recommended JavaScript best practices to make code more testable, namely staying away from singletons and not enclosing too much business logic - both things we avoid in our .NET code for exactly the same reasons!

Friday, 20 April 2012

Handling the orientation of screens in web pages

As part of responsive design a web page should react to the device it is being displayed on and make the most of the available screen real estate accordingly. With the advent of HTML5 & CSS3 the media attribute for stylesheet links has been extended to cope with minimum screen heights and widths. It has also been extended to include "orientation"; the following header block loads/references the relevant stylesheet for the orientation in which you are viewing the website.

<link rel="stylesheet" media="all and (orientation:portrait)" href="portrait.css">
<link rel="stylesheet" media="all and (orientation:landscape)" href="landscape.css">

This functionality is live & dynamic; changing the orientation of an iPad whilst viewing a page that implements the above CSS will correctly switch the CSS references depending upon whether you are now viewing the page in landscape or portrait. This also works on LCD monitors, even to the point of moving a web page between the screens of a dual monitor desktop when one monitor is in landscape and the other in portrait.

When working with browser windows that are not maximised this functionality even takes into account the width of the browser versus its height. As you drag the size of the window, the moment the ratio of height to width crosses over, the loaded CSS switches from portrait to landscape or vice versa.

Thanks to Smashing Magazine for the tip.

Saturday, 25 February 2012

Unit Testing Workflow Code Activities - Part 1

When I first started looking into Windows Workflow one of the first things that I liked about it was how it separated responsibilities. The workflow is responsible for handling the procedural logic, with all its conditional statements, etc., whilst individual code activities can be written to handle the business logic processing, created as small, easily re-usable components. To try and realise my original perception, this series of blog posts will cover the unit testing of bespoke code activities, broken down into:

  • Part One: Unit testing a code activity with a (generic) cast return type (this post)
  • Part Two: Unit testing a code activity that assigns its (multiple) output to "OutArguments" (Not yet written)

So to make a start, consider the following really basic code activity; it expects an InArgument<string> of "Input" and returns a string containing the processed output, in this case a reversed copy of the value held in "Input".

namespace ExampleCode.Workflow
{
using System.Activities;
using System.Linq;
 
public class StringReverse : CodeActivity<string>
{
public InArgument<string> Input { get; set; }
 
protected override string Execute(CodeActivityContext context)
{
var input = this.Input.Get(context);
return string.IsNullOrWhiteSpace(input)
? input
: new string(Enumerable.Range(1, input.Length).Select(i => input[input.Length - i]).ToArray());
}
}
}

This code should be extremely easy to unit test, apart from two immediate problems:

  1. The protected method "Execute" is not exposed to the calling code, making it impossible to call directly.
  2. I have no idea what is required to set up a "CodeActivityContext" or how to go about creating one - as a concrete implementation it's not possible to mock.

I don't really want to create a public method that I can call directly without having to worry about the "context", as that is creating code purely for testing's sake; something that is never a good idea. Just for completeness this could be implemented as follows, but I really wouldn't recommend it!

protected override string Execute(CodeActivityContext context)
{
return this.Process( this.Input.Get(context));
}
 
public string Process(string input)
{
return string.IsNullOrWhiteSpace(input)
? input
: new string(Enumerable.Range(1, input.Length).Select(i => input[input.Length - i]).ToArray());
}

So if we don't want to create new code and expose protected functionality purely for testing, what can we do? The answer lies in the unit test itself. Checking the overloads of the "Invoke" method on the WorkflowInvoker class highlights that it takes either an instance of Activity or an instance of Activity<TResult>; it's important to remember that even a complex XAML workflow is contained within a single Sequence or Flowchart activity, which both inherit from Activity! Checking the return value of the "Invoke" method further highlights that we get back the return value of the code activity instance. This means our unit test can simply be:

namespace ExampleCode.Workflow.Tests
{
using System.Activities;
using Microsoft.VisualStudio.TestTools.UnitTesting;
 
[TestClass]
public class StringReverseTests
{
[TestMethod]
public void TestMethod1()
{
var output = WorkflowInvoker.Invoke(new StringReverse { Input = new InArgument<string>("123") });
Assert.AreEqual("321", output);
}
}
}
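The guard clause for null / whitespace input can be exercised in exactly the same way; an extra illustrative test that could sit alongside the one above (not from the original post):

[TestMethod]
public void EmptyInputIsReturnedUnchanged()
{
    var output = WorkflowInvoker.Invoke(new StringReverse { Input = new InArgument<string>(string.Empty) });
    Assert.AreEqual(string.Empty, output);
}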

It could be argued that this isn't ideal because we haven't really isolated the code we want to test, but the solution doesn't require any test-only changes to the code activity and is extremely easy to set up and call - given that, I think it's an acceptable risk. No matter what logic is contained within the code activity, the only additional complexity in this instance is the number of input arguments. Things like external dependencies (Inversion of Control) and multiple output arguments will be covered in future posts.

Tuesday, 21 February 2012

Windows Workflow: Re-introducing old anti-patterns?

As part of my day job I've been experimenting with Windows Workflow, both in modifying the existing TFS2010 build templates and as a way of controlling the process flow in our new suite of applications. For the most part I've been really impressed; when you sit in a process workshop watching the business users mapping the existing steps out on a white board (or even a wall), it is quickly apparent that showing them a similar flow should hold significant benefits. Gherkin goes some way towards creating a syntax/language that works for both technical and non-technical people, but it is a test language verifying the application is working as intended - you don't write the process itself in Gherkin. We've also found (from experience) that Gherkin has a reasonable learning curve for both technical and non-technical users, whilst most people seem to find it easy to relate to the visual element of workflow with little or no training.

As I said, I've been impressed for the most part with what I've seen of workflow so far. Over the past 10 years or so there has been a significant push to improve the quality of the code that we produce. We have unit tests and TDD, and guidelines such as SOLID and DRY, all designed to aid the developer in creating code that should be easier to maintain and less bug-ridden. This has all helped, and it's not often that you now come across methods inside a class that remind you of the days of classic ASP and/or VB6: massive blocks of procedural code containing large conditional sections, harbouring numerous responsibilities and reasons to change and, worse still, code that can't be tested in isolation(*).

Getting back to workflow re-introducing old anti-patterns, step forward the default build template for TFS2010. I'm not sure how many people have taken a look at this workflow, or worse still had to work with / modify it? That nice happy feeling of replicating what the business wants quickly vanishes and it's like taking a step back into code from the early 90's. Everything (and I mean everything) is in one single workflow. In the designer, even at 50% scale, you have to scroll down several pages to move through one conditional block. Want to find where a particular exception is thrown? It's probably quicker to open up the workflow as plain XAML and text-search for the exception type and/or message text. Even if you figured out how to host the workflow to test it, if you want to test the "gated check-in" functionality you'll have to run the entire workflow from start to finish just to reach the code under test. Want to isolate a hard to reach entry point into the gated check-in logic? You'll have to figure out the exact scenario to replicate it, because you can't set it up or mock the areas you're not interested in. Sound familiar? I'm sure everyone's worked on a code base that suffered the same problems, but in real code we're mostly past them now.

It doesn't have to be this way: workflow allows sequences and flows to be declared in separate XAML files and nested inside a larger workflow. There's absolutely no reason why the gated check-in sequence could not have been its own XAML file, with its own in/out arguments. It quickly becomes SOLID and DRY - it only needs to change when the requirements or process for gated check-in change and it can easily be tested in isolation. I might not be able to figure out how to host / run the entire build template, but even now I'd probably be able to throw together a workflow host application that loaded, set up and ran a "gated check-in" XAML file.

So workflow doesn't have to re-introduce old anti-patterns, but as long as we have real-world examples that contain bad practices it will be harder for less experienced developers not to replicate them. It's probably worth remembering that there are many developers who have come into the workplace never having had to suffer the pain of the procedural code that generated many of the recognised anti-patterns. It would be a big step back for development (and workflow) if examples like the TFS template became commonplace! As a side project I'm trying to gain a full understanding of the default build template (something that also seems to be missing from the on-line Microsoft TFS documentation) and break the XAML into smaller, focused sequences / flows that are easier to understand. Workflow does look like it can successfully be the procedural glue that handles the transition between states and complex / long running process flows, but it does need to adhere to the same testing standards and principles as the code it contains!

(*) This sort of code might still exist but I'm happy to say that I've been lucky to work in and with teams that don't produce it.

Monday, 20 February 2012

TFS2010: Publishing solution projects to their own directories

When looking to automate a TFS2010 build, one of the first issues that most people seem to encounter is that the binaries of every project in a solution end up in the same "bin" directory. The forum post TFS 2010 BUILD SERVER: Can not keep folder tree in the drop location ? details the solution, which involves changes to both the CSPROJ files and the workflow template that is called by your build. Note: every CSPROJ file in your solution needs to be updated, as the workflow loops through the solution finding all the referenced projects. The answer in the forum post covers everything about what is needed, but not why, which can be a bit confusing if you're just starting out with TFS / workflow.

The image to the left is the section of the workflow that is changed. The workflow variable "OutputDirectory" is defined within the scope of "Compile and Test for configuration" (highlighted in green). The value of "outputDirectory" is assigned (highlighted in red) and typically just includes the build name + the build version (so Trunk_20120220.1 would be the first time the trunk build has run on 20th Feb 2012). In the default template the value of "outputDirectory" is assigned to the input argument "OutDir" of the "Run MSBuild for project" step (highlighted in blue), and this is why all the binaries of all the projects end up in a single directory. The first documented change is to modify the properties of "Run MSBuild for project": the value of "outputDirectory" is no longer passed in the input argument "OutDir" but is included in the input argument "CommandLineArguments" instead.

The code below duplicates the change to the "CommandLineArguments" input argument referred to in the forum post, but highlights the interaction with "outputDirectory", whose value is passed as an MSBuild property whose name is not important - as long as it matches the reference you add to your CSPROJ files.

String.Format(
"/p:SkipInvalidConfigurations=true {0} /p:TfsProjectPublishDir=""{1}""",
MSBuildArguments, outputDirectory)

The code below shows the corresponding change made to the CSPROJ file. As in the previous section, the important interactions with the workflow changes are highlighted. The name you use can be any unique name that does not clash with existing workflow / environment variables. The change does need to be made to all CSPROJ files and for all configurations that you need to build. In this instance we can build both debug and release and we will end up with separate binary directories for each. For each CSPROJ file, change the projectname value to a value unique to the project in question.

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
...
<OutputPath Condition=" '$(TfsProjectPublishDir)'=='' ">bin\debug\</OutputPath>
<OutputPath Condition=" '$(TfsProjectPublishDir)'!='' ">$(TfsProjectPublishDir)\projectname\</OutputPath>
...
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
...
<OutputPath Condition=" '$(TfsProjectPublishDir)'=='' ">bin\release\</OutputPath>
<OutputPath Condition=" '$(TfsProjectPublishDir)'!='' ">$(TfsProjectPublishDir)\projectname\</OutputPath>
...
</PropertyGroup>

If you're not familiar with CSPROJ files, they are just XML files that MSBuild uses to build the referenced project. The "Condition" attribute determines whether that XML element (and any children) are passed to MSBuild. So the first "PropertyGroup" is only present if the configuration and platform equal Debug and AnyCPU (respectively), the second if they equal Release and AnyCPU. As should be apparent, this conditional logic means that only one of these PropertyGroups is ever presented to MSBuild, and they contain the configuration/platform specific values needed for the build. The same conditional logic is how the MSBuild behaviour changes between the IDE and TFS: if TfsProjectPublishDir is defined then the corresponding "OutputPath" node is included, otherwise the default, more familiar "OutputPath" node is presented to MSBuild.

Thursday, 16 February 2012

Improving “Boiler Plate” Data-Reader Code – Part 5

In this post we will extend the query functionality to handle stored procedures with parameters. To do this we need to create a new query type interface with an example implementation:

public interface IDefineAStoredProcedureQuery : IQuery
{
string StoredProcName { get; }
IList<SqlParameter> Parameters { get; }
}
 
public class GetCustomersByFirstName : IDefineAStoredProcedureQuery
{
public GetCustomersByFirstName(string firstName)
{
this.Parameters = new List<SqlParameter> { new SqlParameter("FirstName", firstName) };
}
 
public string StoredProcName { get { return "GetCustomersByFirstName"; } }
 
public IList<SqlParameter> Parameters { get; private set; }
}

Now that we have the ability to create stored procedure queries we need something to handle them. To do this we need a concrete implementation of the interface "IHandleAQuery":

public class StoredProcedureQueryHandler : IHandleAQuery
{
public void Assign(SqlCommand command, IQuery query)
{
var castQuery = query as IDefineAStoredProcedureQuery;
command.CommandType = CommandType.StoredProcedure;
command.CommandText = castQuery.StoredProcName;
 
if (castQuery.Parameters != null)
{
command.Parameters.AddRange(castQuery.Parameters.ToArray());
}
}
}

Finally we update our factory to handle the new query interface and return the correct handler:

public static class QueryHandlerFactory
{
public static IHandleAQuery Create(IQuery query)
{
if (query is IDefineCommandTextQuery)
{
return new HandleCommandTextQuery();
}
 
if (query is IDefineAStoredProcedureQuery)
{
return new StoredProcedureQueryHandler();
}
 
var ex = new NotSupportedException();
ex.Data.Add("IQuery Type", query.GetType());
throw ex;
}
}

This can now be called using the following code, which should highlight the power of this repository pattern: whilst we have implemented quite a lot of code behind the scenes, the only change the consumer sees is the type of query object they pass in.

var customers = new SqlRepository(connectionString).Get(
new GetCustomersByFirstName("Paul"),
new CustomerDRConvertorPart2()).ToList();

Improving “Boiler Plate” Data-Reader Code – Part 4

In part 3 we created a SQL repository object that took a populated instance of IQuery to select/return an enumerable list of objects. A limitation of the repository was that the query had to be text based; it couldn't handle stored procedures and/or parameters. By incorporating an abstract factory pattern we can extend the functionality to handle different types of query.

The original code inside "SqlRepository.Get(...)" needs to be changed from:

using (var command = connection.CreateCommand())
{
command.CommandText = query.Text;
 
connection.Open();

To:

using (var command = connection.CreateCommand())
{
var handler = QueryHandlerFactory.Create(query);
handler.Assign(command, query);
 
connection.Open();

The static factory class takes an instance of IQuery and determines which "query handler" to return depending upon the additional interface that the "passed in" query implements. This is implemented via the following code:

public static class QueryHandlerFactory
{
public static IHandleAQuery Create(IQuery query)
{
if (query is IDefineCommandTextQuery)
{
return new HandleCommandTextQuery();
}
 
var ex = new NotSupportedException();
ex.Data.Add("IQuery Type", query.GetType());
throw ex;
}
}

To finish off the implementation the following new interfaces and code are needed:

public interface IDefineCommandTextQuery : IQuery
{
string Text { get; }
}
 
public interface IHandleAQuery
{
void Assign(SqlCommand command, IQuery query);
}
 
public class HandleCommandTextQuery : IHandleAQuery
{
public void Assign(SqlCommand command, IQuery query)
{
command.CommandType = CommandType.Text;
command.CommandText = ((IDefineCommandTextQuery)query).Text;
}
}

Finally, we simplify the IQuery interface, as its only property has now moved up into IDefineCommandTextQuery, and then update all concrete implementations of "IQuery" to "IDefineCommandTextQuery" as well; without these final changes the factory class will not be able to determine the correct handler from the interface that is implemented.

public interface IQuery
{
}
 
public class GetAllCustomersQuery : IDefineCommandTextQuery
{
public string Text
{
get
{
return "SELECT id, Firstname, Surname FROM Customer";
}
}
}

Now our code can be safely extended to handle different query types just by creating a new implementation of the query handler and modifying the factory to handle the identification and creation of the new type. Part 5 will show how this functionality can be extended to handle stored procedures with parameters.

Improving “Boiler Plate” Data-Reader Code – Part 3

In Part 1 of this series we started with a basic Data-Reader / SQL Connection/Command pattern and illustrated how it is possible to abstract the parsing of the Data Reader into a standalone object that can be fully unit tested in isolation of the calling code. In Part 2 of the series we made a very simple optimisation to the “DataReader” convertor and updated the tests to capture/verify the changes.

In part 3 of the series we put this all together into a repository pattern to create a reusable and testable data access layer. The first step is to create an interface for the repository.

namespace DataAccess.Example
{
using System.Collections.Generic;
using System.Data.BoilerPlate;
 
public interface IRepository
{
IEnumerable<TEntity> Get<TEntity>(IQuery query, IConvertDataReader<TEntity> dataReaderConvertor);
}
}

The intent implied by the interface is that "Get" is responsible for returning an enumerable list of a generic type. To do this we pass in an implementation of IQuery and an implementation of IConvertDataReader for the generic type we are to return. We already have an implementation of IConvertDataReader that we can use from the previous post. In this example the implementation of IQuery just returns SQL text that can be executed; it is shown below, after a quick reminder of what the convertor looks like.
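For anyone who hasn't read parts 1 and 2, the shapes of the convertor interface and the customer convertor are roughly as follows. This is a sketch reconstructed from how they are used in this post (including an assumed Customer entity); the exact code lives in the earlier parts.

namespace System.Data.BoilerPlate
{
    // The convertor contract from part 1: turn the current row of a
    // data reader into a strongly typed entity.
    public interface IConvertDataReader<TEntity>
    {
        TEntity Parse(IDataReader dataReader);
    }
}

namespace DataAccess.Example
{
    using System.Data;
    using System.Data.BoilerPlate;

    // Simple entity matching the SELECT statements used in these posts (assumed).
    public class Customer
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string Surname { get; set; }
    }

    // Stand-in for the convertor built in part 2.
    public class CustomerDRConvertorPart2 : IConvertDataReader<Customer>
    {
        public Customer Parse(IDataReader dataReader)
        {
            return new Customer
            {
                Id = (int)dataReader["id"],
                FirstName = dataReader["Firstname"] as string,
                Surname = dataReader["Surname"] as string
            };
        }
    }
}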

namespace DataAccess.Example
{
public interface IQuery
{
string Text { get; }
}
}
 
namespace DataAccess.Example
{
public class GetAllCustomersQuery : IQuery
{
public string Text
{
get
{
return "SELECT id, Firstname, Surname FROM Customer";
}
}
}
}

The final piece of the jigsaw is the implementation of the repository interface, in this instance for SQL but it could be any data provider that returns an implementation of IDataReader. A basic implementation of the SQL repository is shown below:

namespace DataAccess.Example
{
using System.Collections.Generic;
using System.Data.BoilerPlate;
using System.Data.SqlClient;
 
public class SqlRepository : IRepository
{
private readonly SqlConnectionStringBuilder config;
 
public SqlRepository(SqlConnectionStringBuilder config)
{
this.config = config;
}
 
public IEnumerable<TEntity> Get<TEntity>(IQuery query, IConvertDataReader<TEntity> dataReaderConvertor)
{
using (var connection = new SqlConnection(this.config.ConnectionString))
{
using (var command = connection.CreateCommand())
{
command.CommandText = query.Text;
connection.Open();
 
using (var dataReader = command.ExecuteReader())
{
while (dataReader.Read())
{
yield return dataReaderConvertor.Parse(dataReader);
}
}
}
}
}
}
}

We now have a repository object that can be used against any SQL database to return a list of any type of object that can be populated using a standard SQL SELECT statement; the code below shows how to put this together.

var customers = new SqlRepository(connectionString).Get(
new GetAllCustomersQuery(),
new CustomerDRConvertorPart2()).ToList();

To understand the power of this pattern, consider the following code that is all that is needed to update the previous code to return a different list of customers; in this case all those with the first name of "Paul".

namespace DataAccess.Example
{
public class GetAllCustomersCalledPaul : IQuery
{
public string Text
{
get
{
return "SELECT id, Firstname, Surname FROM Customer WHERE Firstname = 'Paul'";
}
}
}
}
 
var customers = new SqlRepository(connectionString).Get(
new GetAllCustomersCalledPaul(),
new CustomerDRConvertorPart2()).ToList();

Part 4 shows how the code can be modified so it can be extended to handle different "query types".

Wednesday, 15 February 2012

Managing your dependencies in NuGet

When creating NuGet packages, how do you define your dependencies?

If you're using the default setting of version 'x' or newer, are you sure that all future versions of the dependency will work with the current version of your code? I'm not sure many people would be happy saying yes to that question, yet most NuGet packages are deployed with the default setting for their dependencies. Take a typical dependency, log4net: you might deploy a package today referencing the current build and everything's fine, but in a month or two's time an update to log4net may be deployed that contains breaking changes. From that point on, anyone who grabs your package from NuGet will find that it no longer works - instead of the version of log4net you developed against, they are now getting the latest version, which breaks your code.

Whilst it may be more work, the safer option is to use a version "range" for your dependencies, only including versions that have been tested and are known to work with your code base (for example, a range of [1.2,1.3) restricts the dependency to the 1.2.x releases). It requires you to retest and update your package each time a dependency is updated, but your users will thank you for it.

Update

Since writing this blog post a breaking change has been published to NuGet for the very widely used log4net package. Phil Haack's blog post covers the issue in great detail and I won't attempt to cover it again here, but it does highlight the risks associated with being fully dependent on one or more 3rd parties - your code (and therefore your consumers' code) may fail because a dependency has been incorrectly versioned / deployed. In his article Phil also links to an interesting set of posts on how NuGet handles package versioning.

Thursday, 2 February 2012

What's New In Windows Workflow 4.5

Just as I start getting up to speed with WF4.0, MSDN magazine publishes an article detailing what's new in WF4.5. It looks like there's a lot of good stuff coming, but the main thing that I noticed was that v4.0 requires full trust to run. That shouldn't be a problem for the project we're intending to use workflow in, but if it will run in partial trust in the next release that will open up its usage for many other applications.

Wednesday, 1 February 2012

Windows Workflow 4.0 on Twitter

I've been trying to find workflow resources on Twitter, but unlike some other technologies there doesn't seem to be much regular traffic. The hashtag #WF4 does seem to be used, and I've started a Twitter list of people I find regularly talking about WF4.0 on-line.

Starting Windows Workflow v4.0

Today I've started learning about Windows Workflow v4.0; I'm hoping that it will help out with a current project. The original requirement was for mapping real world business processes to what would normally be complex, long running computer tasks. Whilst investigating what it can do, I'm beginning to think there could be real value in using it for any task that requires process flow logic, even extremely short lived ones. This could be controlling page flow through a web application, or even defining the complex logic that can sometimes end up making MVC controller actions fatter than they really should be.

A really good starting point for getting me going has been the MSDN WF4.0 videos under the beginner's guide; if you have a Pluralsight subscription, Matt Milner presents some really useful stuff too.

As well as just learning the basics of Workflow, what really interests me is how it should be architected in a real world LOB application, taking into account the usual "-bility" issues: scalability, maintainability and reliability.

  • The biggest challenge will probably be understanding how to manage multiple long running workflows that have many external dependencies, inside a service that can be paused, stopped and restarted, on a server that can obviously be rebooted or even crash. How does the service restart and reload all the workflows it was managing? (See the sketch at the end of this post.)
  • Building on from that, how would this all work in a load balanced environment where you may have 2 or more of these services running and sharing the load - maybe a workflow that is started on one server will be persisted / unloaded when it becomes idle and then resumed on another server.
  • How can a workflow be passed between the different layers of a system, so that it may be started as part of a web request from a user and then passed to a service to complete as a long running task?
As I start investigating each of these issues I'm hoping that this group of MSDN articles might be a good starting point.
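For the first of those questions, the hosting APIs look fairly approachable. The sketch below is based on my initial reading only (it assumes the standard SqlWorkflowInstanceStore with its schema already created in a hypothetical WorkflowStore database, and a workflow that goes idle on a Delay), so treat it as a starting point rather than a recommendation:

using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;
using System.Threading;

public static class PersistenceSketch
{
    public static void Main()
    {
        // The instance store lives in SQL Server, so any host in the farm can
        // reload an instance that another host persisted and unloaded.
        var store = new SqlWorkflowInstanceStore("server=.;database=WorkflowStore;Integrated Security=SSPI");

        // A trivial workflow that goes idle - a stand-in for a real
        // long running business process.
        Activity workflow = new Sequence
        {
            Activities = { new Delay { Duration = TimeSpan.FromMinutes(1) } }
        };

        var application = new WorkflowApplication(workflow)
        {
            InstanceStore = store,
            // Persist and unload the instance whenever it goes idle.
            PersistableIdle = args => PersistableIdleAction.Unload
        };

        application.Run();
        var instanceId = application.Id;

        // Give the instance time to hit the Delay, persist and unload; in
        // reality another host (or the same one after a restart) would pick
        // it up from the store.
        Thread.Sleep(TimeSpan.FromSeconds(5));

        var resumed = new WorkflowApplication(workflow) { InstanceStore = store };
        resumed.Load(instanceId);
        resumed.Run();
    }
}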