Monday, 31 December 2012

Looking forward to 2013

First and foremost, the biggest change in 2013 is that I will be responsible for managing the development team. To help the team with the challenges we will face in 2013 I will need to stay hands-on too, so time management will be critical to make sure I can personally fit it all in. This will probably be the biggest challenge I have faced in the last few years and something that I'm looking forward to getting started with.

Another change for 2013 will be refining our agile process. For the 2nd half of 2012 we followed a scrum methodology, successfully developing and deploying an MVC replacement for an old ASP legacy application. Feedback from the technical teams resulted in us modifying the process, removing task hour estimation from the sprint planning session with no measurable negative effect. Additional feedback from the business indicated that sprints were causing some friction when defining stories and planning releases. The 6-9 month development cycles of waterfall just don't work for most software applications, but at the same time, the artificial barriers that an 'x' week sprint causes can confuse the business. As we'd come to similar conclusions in my previous role, we are going to try to develop a "Kanban" methodology that will work for us.

  • We want to keep the short and frequent feedback loops and engagement with the business.
  • We also want to keep most stories as small as they can possibly be (days rather than weeks), but for some functionality we just don't want to be forced to break up a story just to make it fit within a sprint (but I must stress that we hope these to be the exception, otherwise it is just waterfall by the back door).
  • We also want to look into releasing at the end of every story so the business can get the value of development work as quickly as possible.

To help the business adjust to the harsh economic climate that we still face, the biggest challenge is going to be making sure that every single bit of effort really counts! This really does include everything: we need to improve the deployment process across our entire application suite, making sure we really do live continuous integration / deployment. All development work needs to address existing bugs / performance issues whilst providing much-needed new functionality. As a part of this we should get the chance to learn Puppet, refine our TeamCity skills, potentially look at a DVCS and, most importantly, have fun developing!

Review of 2012

2012 has been a full-on year with lots of change. I started the year in my previous role, preparing for a transition into a newly created role of "Solution Architect"; moving away from day to day coding and from purely concentrating on .NET applications / systems. It sounded like a really interesting challenge, but another opportunity presented itself: working for my current company in another newly created role of "Technical Team Lead". It was a hands-on development role, leading a team of 3 developers bringing a large business-critical application in-house and helping to roll out scrum and other practices (such as TDD/BDD, Continuous Integration, etc.) to the business and team.

The very first challenge was the knowledge transfer sessions for the "out-sourced" application, which had been designed and developed by one company and then passed on to a second company for maintenance. It's probably fair to say that the code has grown organically rather than being designed, and has undergone several architectural changes without completing or replacing the previous model(s). As a result it contains both direct SQL access via ADO.NET and at least 2 different ORM implementations. Many other areas of the code base contain a mixture of design patterns alongside some large "procedural" class methods. There were no automated tests of any form, and most documentation had either been lost or was now hopelessly out of date! Having just had the opportunity of working on a completely greenfield system in my last role, getting the chance to apply that knowledge to a large, deployed, business-critical brownfield system seemed like a nice challenge (though I have spent much time wondering if I was mad!)

With such an interestingly complex system, we had our most success videoing the handover sessions, recording what didn't work as much as what did, as this gave us both the instant knowledge and the ability to go back after the event to refresh our memories. It also means those sessions are available as a record for new members joining the team. The very first win was the ability to deploy and develop all aspects of the system locally, with instruction guides that could be followed by the entire team and that worked every time on every machine (no small achievement!). We are now preparing for the production deployment of our first "in-house" developed release of this application, including a few small pieces of new functionality but mostly containing massively increased logging to help track down issues reported by the business in the production environment.

As well as taking over responsibility for the main application code base, we have replaced an existing ASP system with an MVC application developed in sprints using Scrum, TDD and TeamCity Continuous Integration - a massive learning curve for the team which they handled extremely well. This system has had two functionality deployments so far, with a third currently in test. This is a big improvement over the current norm of deployments every 3 or 6 months but there is still a little way to go before we can look to release at least every sprint.

I think that on the whole the team can be really happy as we have accomplished everything that was promised in 2012, just not necessarily everything that was hoped for - but we have to leave something to improve upon in 2013!

Thursday, 13 December 2012

VS2012 does not support Silverlight 3.0

Not sure that this post really needs anything more than the title. If you are upgrading a Silverlight project from VS2008 or VS2010, be aware that VS2012 only supports versions 4 and 5 of Silverlight. If you need to use the solution / projects in VS2012, I'd recommend upgrading to at least Silverlight 4 first in your existing Visual Studio and then, once it's working, upgrading the solution / projects to VS2012 so you're only tackling one set of issues at a time.

Saturday, 8 December 2012

Upgrading a TeamCity build from VS2008 to VS2012 and using NuGet

We've recently upgraded to Visual Studio 2012 from VS2008 and switched over to using NuGet rather than direct project references for our third party tools. Everything worked as planned until we checked the solution into source control and the personal build for TeamCity kicked off. Almost straight away the build fell over with the following error message:

D:\TeamCity\buildAgent\work\e6ae794aab32547b\.nuget\nuget.targets(102, 9):
error MSB4067: The element <ParameterGroup> beneath
element <UsingTask> is unrecognized.
Project BJ.Core.sln failed.

Our projects were still targeting .NET 3.5, but to fix the problem we needed to update the Visual Studio version in the build configuration.

Note: we are using Visual Studio 2012, but our TeamCity server is currently hosted on a Windows Server 2003 instance, so we must select the VS2010 option in our build configuration (the VS2012 option only works on Windows Server 2008 and higher due to a .NET 4.5 limitation).
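For context on why the error appears at all: the `.nuget\NuGet.targets` file that package restore adds to the solution declares an inline task using `<UsingTask>` with a `<ParameterGroup>`, a construct that only exists from MSBuild 4.0 onwards, so an older MSBuild chokes on it even though the projects themselves target .NET 3.5. The fragment below is an illustrative sketch of that construct (task and parameter names are from memory, not copied from our file):

```xml
<!-- Illustrative sketch of the inline-task declaration in .nuget\NuGet.targets.
     <ParameterGroup> beneath <UsingTask> is an MSBuild 4.0 feature, which is
     why MSBuild 3.5 fails with MSB4067 on this element. -->
<UsingTask TaskName="DownloadNuGet" TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
  <ParameterGroup>
    <!-- Parameters for the inline task are declared here. -->
    <OutputFilename ParameterType="System.String" Required="true" />
  </ParameterGroup>
  <!-- Inline C# task body follows in the real file. -->
</UsingTask>
```

This is why switching the TeamCity build configuration to a VS2010-or-later toolset fixes the build: it swaps in an MSBuild version that understands inline tasks, without changing what framework the projects compile against.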

Can't target .NET 3.5 after upgrading a Visual Studio 2008 solution to 2012

I ran into an interesting problem today when upgrading a Visual Studio 2008 project to Visual Studio 2012 whilst trying to keep the targeted framework at .NET 3.5. Each time I opened the solution, all my test projects were automatically upgraded to .NET 4.0 regardless of what I did. It was impossible to downgrade a project using either the project property page or by manually editing the project file: I'd make the change and reload the project, a project conversion report would be shown, and the project was back to targeting 4.0 again.

After a little more digging I noticed that it was only my test projects doing this; all the other class libraries, etc. were perfectly happy targeting 3.5. After a little more experimentation I isolated the cause to the ProjectTypeGuids element: if I removed it from the project definition then I could re-target my unit test project at v3.5 and everything was happy. Note: as the linked blog post indicates, you might lose the ability to add new tests directly from the "Add New" menu - it would appear you can have one or the other, but not both!
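To make the fix concrete, here is a hedged sketch of the relevant part of a test project's .csproj (your exact GUIDs may differ; the first GUID shown is the conventional marker for a Visual Studio test project and the second for a C# project):

```xml
<!-- Illustrative fragment of a unit test project's .csproj file.
     The ProjectTypeGuids element marks this as a VS test project, which is
     what triggers the forced re-target to .NET 4.0 on conversion. -->
<PropertyGroup>
  <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
  <!-- Delete this whole element to keep the project targeting v3.5: -->
  <ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
</PropertyGroup>
```

With the element removed, Visual Studio treats the project as a plain class library, so the 3.5 target sticks - at the cost of the test-specific tooling mentioned above.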

Saturday, 1 December 2012

Agile Delivery Experiences

I'm now in my second role in which I've had the chance to introduce agile working practices to the team. In both roles the projects and applications developed under scrum have been successfully shipped and accepted by the business. The success of the deliveries has been measured by:

  • Functionality: The early visibility the business gained through the end of sprint demonstrations made sure that all the functionality the team developed stayed on track and provided exactly what the business wanted. There were no nasty surprises once the final code was shipped!
  • On Time: It is generally agreed that you can only have two of the following: quality, functionality and/or a "business set" ship date. The hardest lesson for a business to learn is that if it wants a set amount of functionality by a set ship date, then the only thing that can be modified by the team to meet the expectations is quality. Luckily in both instances everyone agreed that quality shouldn't be compromised and the business knew the functionality it needed, so the ship date was determined from our velocity and the story points for the desired functionality. In a couple of instances we didn't complete everything the team had committed to for a sprint, but the business was happy, because whilst the delivery did slip a sprint, they were confident it was within the team's ability to meet the goal.
  • Bug Free: Shipping the desired functionality on an agreed date is rare enough, but after launch the applications have also experienced an extremely low defect rate - something I'd never experienced prior to switching to an agile development methodology. A TDD approach was used in both roles, but I really believe the benefit of TDD is in ongoing maintenance, whilst the initial low defect count can be attributed to the scrum process.

I'm now looking forward to my next challenge/project to apply everything we've learnt in the past projects; hopefully with the same successes.

Do "Task Hours" add anything in Scrum (Agile)?

What do task hours add to the overall process in scrum?

This is a question that has arisen from all team members in both instances where I've helped teams switch over to scrum. The benefits of practices like comparative story point estimation, the 2-week sprints, stand-ups and the end-of-sprint demo have been self-evident to the team, but to a person, every team member has expressed dismay when it comes to task planning and estimating each task in hours. Left unchecked, there is a natural tendency for people to begin to dread the start of each sprint purely because of the task planning session.

In my current role we've been lucky to investigate this further as a team.

The team sat down to discuss the problems it was experiencing with estimating tasks in hours and the following common themes appeared:

  • It is hard: Maybe it shouldn't be, but time estimation is hard! Story points are comparative and abstract, making them easier to determine, but a time estimate is generally expected to be pretty exact and accurate. There is no getting away from the fact that time estimation is just plain difficult (and not enjoyable) for most people!
  • It takes too long: The discussion around how long a task will take takes time, and each story might have 5+ tasks, so it could end up taking an hour just to task (and time-estimate) a single story! It might take seconds to agree that we need a task to add new functionality to a service, but a discussion to estimate the duration of that task will take several minutes, if not longer.
  • Outcome of one task affects the duration of a subsequent task: Say you have two tasks, one to create / update the Gherkin and another to create / update the bindings. How long the second task takes is directly proportional to how many features / scenarios change or are created in the first, which is nigh on impossible to estimate at the task planning stage.
  • It doesn't involve all the team: When the testers are discussing an estimate for the testing task, the developers are under-utilised and vice versa. We tried having two task estimate discussions going on at the same time and that didn't work very well either.
  • Working to hours, rather than done: We all realised there was a natural tendency to sometimes work to the hours rather than to the definition of done! If you know you only have 30 minutes left on the task estimate, it is very hard to add those couple of extra tests that would really trap the edge cases - it's much easier to finish the task on time and put off those tests to another task that might finish earlier than estimated. This is reinforced by the positive comments that generally arise when the "actual" hours line on the sprint burn-down graph follows the estimate line.
  • It ignores individual ability: A team is made up of individuals of varying abilities and knowledge. I might estimate a task to take an hour because I worked on something similar last sprint, or am familiar with that functionality, whilst somebody else with different experience will estimate the same task as 3 hours. We are both right in our own estimations, but we shouldn't be allocating tasks at this stage, so which estimate should you use?
  • Compromise reduces the value of the process: Leading on from the previous point, if the individual time estimates for a task range from 1 hour to 3 hours, you are looking at a compromise - this tends to work for story points but doesn't lend itself to task hour estimates, as it results in a very wobbly line on the burn-down graph.
  • Discussing the difference for longer than the difference itself: Continuing this theme, the difference between the largest and smallest estimates may be an hour. With 7 people around the table, the moment you spend longer than 8-9 minutes discussing the task (it can happen) you've spent longer (in man hours) discussing the difference than the difference itself!

Sitting down with the product owner, it was story points / velocity that gave them the confidence that we could deliver what we had committed to.

Sitting down with the scrum master, they understood the concerns of the team and, in light of the comments made by the product owner, agreed to run a trial sprint where stories would be tasked but no hours would be assigned. The team committed that we wouldn't slip back into a waterfall approach and that our ability to flag potential issues during the daily stand-ups wouldn't suffer.

We are now in sprint 3 of our trial, and I'm happy to say that the removal of task hour estimation hasn't had any measurable negative impact. Personally, having worked in an agile environment for two years, it did seem strange to be missing the sprint burn-down graph, but further into the trial it has proven that the graph was giving us false confidence without providing any actual benefit! Our velocity and ability to commit to and deliver stories haven't seen any negative impact, whilst we've trimmed 3-4 hours off the task planning meeting every 2 weeks (14-28 man hours!) and everyone now starts the sprint feeling ready for it, rather than worn down by the long meetings needed to estimate the task hours!

If you and your team have reached the point where you dread the start of new sprints purely because of the task planning session, why not give dropping the "hours" part of task planning a go - and let me know how it worked out for you.