Thursday, March 7, 2013

Favor Feature Toggles over Feature Branches

As you may be aware, our development process at Pluralsight is a kanban style of agile. We limit our work in progress so that we can maximize the flow of new features into production. When you combine this with our focus on automated testing, use of a continuous integration server, a distributed source control system, and single-click deploy/rollback, the result is continuous delivery. Even with a fairly small team, we almost always deploy new features more than once a week and occasionally two or three times in a single day.

Because of this style of development, we occasionally saw the Big Scary Merge described by Martin Fowler. As a team, we value continuous improvement, so we researched ways to address the pain of merging feature branches. Despite some skepticism, we have been experimenting with feature toggles. Many developers are familiar with the term, but there are few examples of how it is done, so allow me to share how we do it in our ASP.NET MVC web site.

How we do it

First of all, we follow Martin Fowler's advice and toggle close to the user interface. In the following sample, you can see that there is a Change User Name feature that can be toggled on and off. The view model we pass in has a property that will cause the change user name partial view to be shown only when the feature is on.

@if (Model.ShowChangeUserNameOption)
{
  @Html.Partial(MVC.Profile.Views._ChangeUserNameVm, Model.UserNameVm)
}

Now, this view model property is set by the view model builder in the application layer of our web app. As you can see, it is set using the value from the settings provider, which reads directly from the web.config.

EditUserProfileVm BuildEditUserProfileVm(UserAccount userAccount)
{
  if (userAccount == null) return null;
  return new EditUserProfileVm
             {
               BasicInfo = BuildEditBasicUserProfileVm(userAccount),
               EMailInfo = BuildChangeUserEmailVm(userAccount),
               ProfileBelongsToCurrentUser = userAccount.Handle == currentUserProvider.UserHandle,
               UsageInfo = userProfileUsageInfoVmBuilder.BuildUsageInfo(userAccount.Handle),
               ShowIndividualUpsellOffer = ShowIndividualUpsellOffer(userAccount),
               ShowCorporateUpsellOffer = ShowCorporateUpsellOffer(userAccount),
               ShowChangeUserNameOption = settingsProvider.FeatureToggleUserNameCustomizationActive
             };
}

There isn't much of interest here in the settings provider other than the fact that we default the toggles to off, so no new features accidentally slip into production.

public bool FeatureToggleUserNameCustomizationActive
{
  get { return GetOptionalBooleanSettingWithDefaultValueOfFalse(SettingKeyNames.FeatureToggleUserNameCustomization); }
}
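
For reference, here is a minimal sketch of what such a helper might look like. The post doesn't show our actual implementation, so treat this as an illustration that assumes the setting is read via System.Configuration:

using System.Configuration;

public class SettingsProvider
{
  // Hypothetical helper: a missing or unparsable appSetting yields false,
  // so a toggle can never accidentally turn itself on in production.
  private bool GetOptionalBooleanSettingWithDefaultValueOfFalse(string key)
  {
    bool value;
    return bool.TryParse(ConfigurationManager.AppSettings[key], out value) && value;
  }
}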

By treating the feature toggles as first-class members of the application, we can easily write unit tests to verify the behavior of the code when the feature is active and when it is inactive. We can work with the new feature locally by adding a single appSetting to our local web.config files.

<add key="FeatureToggleUserNameCustomization" value="true" />

We can release our code even if this feature isn't complete because no one can access it until it is enabled. Additionally, we can deactivate it in production very easily should we discover a problem.

What we learned

During our last team retrospective, we discussed our experience with feature toggles and feature branches. After reviewing what we saw as the pros and cons of each, we decided that we favor feature toggles over feature branches. We will still use feature branches when appropriate, of course. Also, this preference does not affect the workflows of individual developers, which may involve local branching.

Feature Branches

  • Pros
    • Code for a new feature cannot possibly affect production because it hasn't been deployed there
    • You can work on a feature without being concerned for how the development of other features is progressing 
    • Our continuous integration server is aware of branches and builds them automatically
    • Our distributed version control system is designed to make branching and merging painless
  • Cons
    • The Big Scary Merge is still painful despite tool support for branching
    • There is a chilling effect on refactoring

Feature Toggles

  • Pros
    • We avoid the Big Scary Merge
    • We have to address upgrade paths and testing as we develop new features rather than after they are complete
    • We integrate all in-progress features as they are developed so there are no surprises when we put them together
  • Cons
    • There is a possibility that incomplete features could negatively impact production
    • Though we haven't seen this, there could be reluctance to push to the shared repo

Monday, November 19, 2012

Someone is wrong on the internet!

Unfortunately, this time it is me. I wrote about the differences between how I perceive TDD and BDD, but I didn't do my research. Instead, I based my understanding on hearsay and assumptions. I knew that Dan North had coined the term, and I should have looked for his easy-to-find blog post.

Instead, I got most of my knowledge of BDD from discussions at local user groups. These almost always took the form of "Tool A is a TDD tool while Tool B is a BDD tool." Because I found this to be a fairly uninteresting distinction, over the course of several discussions I effectively invented a definition of BDD that I found more valuable to me and my team than "using Tool B means we are a BDD shop."

When I saw a discussion of the topic on Twitter, I interjected myself and asked for clarification. Luckily, I asked good netizens (Avdi Grimm, Angela Harms) rather than trolls and I got some helpful advice.

So, I was able to learn a valuable lesson this weekend: do your research. It's easy these days and can help you contribute to the conversation rather than detract from it.

Thursday, October 11, 2012

Know Thyself

So you've probably heard this ancient aphorism, but what does it have to do with software development? Very few of us deliver software in isolation. We generally have customers, managers, testers, designers, operations, and fellow developers on our development and delivery teams. Because we have to interact with all of these other people, it is in our best interest to know who we are, what motivates us, and what our strengths and weaknesses are.

For the last couple of years, I have intermixed a few books and videos on psychology with my standard technical fare in an effort to get to know myself better. It appears that many in our industry are doing the same.

At the Agile Alliance 2011 conference, Dr. Barbara Fredrickson spoke about Why Care about Positive Emotions?. Linda Rising has spoken about The Agile Mindset at several conferences, including Agile Roots 2012. At the recent WindyCityRails conference, Steve Klabnik spoke on Development and Philosophy. And many members of my local Ruby user group have read Dr. Martin Seligman's Learned Optimism on the recommendation of Dave Brady.

Personally, I have taken the Myers-Briggs test and learned that I fall into the ENTJ/Field-marshal personality variant. I have also read Strengths-Based Leadership in the last couple of weeks and took the associated test. My StrengthsFinder results were unsurprising and correspond well with my Myers-Briggs. My top five strengths in order are:
  1. Input
  2. Communication
  3. Learner
  4. Woo
  5. Analytical
When I looked over the list, I immediately saw several synergies and began to understand why some things come easily for me, while others are quite challenging. I feel that this is a great set of strengths and describes me well.

So is there a point to all this navel gazing? I believe that by knowing ourselves, we can work better with the other members of our teams. When we know what we bring to a team, we can look for others that will bring different and complementary strengths. To quote Strengths-Based Leadership, "While the best leaders are not well-rounded, the best teams are." And don't we all want to be on the best teams?

Thursday, October 4, 2012

Tales of Distributed Teamwork

What follows are three stories of actual distributed teams that I have been on. These are my stories and my version of events. Others involved may tell different tales.
· · ·
A Tragedy in 3 Acts
Act I
The Scene
It was my first agile project! I didn't know it though. None of us knew what we were doing, but we knew that anything that got in the way of satisfying our customers had to go. We were pretty good. We were so good, in fact, that our corporate entity took control of the project.
The Team
With two developers, our field-support/QA person, and our saleswoman/business expert, this was a small, tight team who enjoyed strong support from our executive sponsor.
The Tools
The developers employed a tactic of working together (we might call it pairing now). We released more than once a week and we all worked directly with our customers daily. We also met daily to discuss sales, installations, features, and bugs.
How it Felt
It felt amazing. We had many happy customers and we worked with them to build a better product.

Act II
The Scene
Our project was a success! I was transferred to a team at headquarters and we were rolling out nationwide.
The Team
I was one of 15 developers on my team. We were one of two teams reporting to the same manager. I was the only remote team member.
The Tools
Once a week, we had a team status meeting. We also used email for anything too urgent to wait for the next meeting. My team lead would occasionally call and give me assignments that never lasted long enough to keep me busy.
How it Felt
I felt lost. I didn't know what I was supposed to do day to day. I didn't have much guidance from my team lead and I was instructed not to work directly with the customers as this was the business analysts' job.

Act III
The Scene
In a desperate attempt to reclaim the past glory of the project, I moved my family across the country. I worked in the office with my team again. This time I wore a suit every day.
The Team
The team didn't change from Act II, but I finally felt like part of it.
The Tools
We had methodology. We had documentation. We knew what everyone was going to be working on a year in advance. The small project I was developing was an anomaly because it hadn't been planned yet. Fortunately, we had meetings to resolve that.
How it Felt
Being part of the team was enough for a while, but eventually I soured on the whole project and left the company for a project where I could release code into production again.
· · ·
A Land War in Asia
The Scene
The flagship project wasn't going to be ready in time, so the troops were rallied. New management brought new energy and ideas to the table. Additional team members and teams were hired. The project could not fail.
The Team
The teams were constantly in flux. There were either one or two co-located teams depending on the week. There were two additional teams in other time zones. Each team had developers and QA, but all the business knowledge was with the original team(s).
The Tools
Each team was supposed to be autonomous, but the reality was that we shared a common codebase, database, and source control repository. We sometimes had daily meetings between the teams, but more often than not, talking to members of the other teams was considered a waste of time.
How it Felt
There was a constant feeling of conflict. Though we all claimed to be headed together toward the same goal, each team was attempting to somehow beat the others. Any time members of the other teams made mistakes, the whole team would be attacked. Feelings were often hurt and apologies were hard-won. We were at war with ourselves.
· · ·
A Tall Ship...
The Scene
The company is very small and very successful. The product is interesting and in demand.
The Team
The team is a small group of seasoned veterans. Each person brings unique skills, but all have extensive experience. Everyone enjoys creating code and helping to build the company. Everyone uses his or her strengths to make the team better.
The Tools
We communicate constantly via email, chat, and video conferencing (Skype). We use screen sharing software (TeamViewer) daily. The CI server is constantly notifying everyone of newly completed work. The unit, integration, acceptance, and UI tests allow us to make changes very quickly without fear and with little risk.
How it Feels
The feeling from the first agile project is back. Coming to work is a joy. Releasing software is exciting rather than scary.
· · ·
The Moral of the Story
Was distributing the teams the primary cause of the sad endings for the first two tales? Can a distributed team be productive? Can a distributed group of people form a cohesive team? Would co-location have solved any of the problems in these stories? Do tools make a difference? Is it the team members who make or break the team (whether distributed or not)? Is it the project that matters most? Would you work on a distributed team?

I can't answer all those questions, but I can tell you a story...

Thursday, September 27, 2012

.NET web.config Transformations Revisited

I recently posted about how to use a custom MSBuild file to run web.config transforms in your continuous integration process. This is the method we used on a couple of my previous teams.

At Pluralsight, we use a different method. We do our 1-Click deploys through a custom web application that takes the output of our TeamCity builds as its input. As we built our deploy tool, we chose to avoid calling shell processes. This meant finding an alternative to the MSBuild file for web.config transforms. What we came up with is the following.

using System.IO;
using Microsoft.Web.Publishing.Tasks;

namespace SiteDeploy.SiteConfiguration
{
  public interface IConfigFileGenerator
  {
    void TransformWebConfig(string environmentName, DirectoryInfo sourceDirectory, DirectoryInfo targetDirectory);
  }

  public class ConfigFileGenerator : IConfigFileGenerator
  {
    const string webConfigFileName = @"web.config";

    public void TransformWebConfig(string environmentName, DirectoryInfo sourceDirectory, DirectoryInfo targetDirectory)
    {
      PerformTransform(sourceDirectory, targetDirectory, string.Format(@"web.{0}.config", environmentName));
    }

    private void PerformTransform(DirectoryInfo sourceDirectory, DirectoryInfo targetDirectory, string webConfigTransformFileName)
    {
      var transformer = new TransformXml
        {
          BuildEngine = new BuildEngineStub(),
          SourceRootPath = sourceDirectory.FullName,
          Source = webConfigFileName,
          Transform = webConfigTransformFileName,
          Destination = Path.Combine(targetDirectory.FullName, webConfigFileName),
        };
      transformer.Execute();
    }
  }
}
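
Using it from the deploy tool is then a one-liner per environment. The paths in this usage sketch are hypothetical:

// Transform the TeamCity build output for the Live environment into the
// folder the deploy will be served from (assumes using System.IO).
IConfigFileGenerator generator = new ConfigFileGenerator();
generator.TransformWebConfig(
  "Live",
  new DirectoryInfo(@"X:\Builds\WebSite\latest"),
  new DirectoryInfo(@"X:\Deploys\WebSite\pending"));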

This requires us to create a stub for the build engine, like so:
using System;
using System.Collections;
using Microsoft.Build.Framework;

namespace SiteDeploy.SiteConfiguration
{
  public class BuildEngineStub : IBuildEngine
  {
    const string LogFormat = "{0} : {1}";

    public void LogErrorEvent(BuildErrorEventArgs e)
    {
      Console.WriteLine(LogFormat, "ERROR  ", e.Message);
    }

    public void LogWarningEvent(BuildWarningEventArgs e)
    {
      Console.WriteLine(LogFormat, "WARNING", e.Message);
    }

    public void LogMessageEvent(BuildMessageEventArgs e)
    {
      Console.WriteLine(LogFormat, e.Importance, e.Message);
    }

    public void LogCustomEvent(CustomBuildEventArgs e)
    {
      Console.WriteLine(LogFormat, "CUSTOM ", e.Message);
    }

    public bool BuildProjectFile(
      string projectFileName,
      string[] targetNames,
      IDictionary globalProperties,
      IDictionary targetOutputs)
    {
      return true;
    }

    public bool ContinueOnError
    {
      get { return true; }
    }

    public int LineNumberOfTaskNode
    {
      get { return 0; }
    }

    public int ColumnNumberOfTaskNode
    {
      get { return 0; }
    }

    public string ProjectFileOfTaskNode
    {
      get { return string.Empty; }
    }
  }
}
Of course, we put a full suite of integration tests around the implementation so that we can safely mock it in the unit tests for the deployment tool.
using System.IO;
using Machine.Specifications;
using SiteDeploy.SiteConfiguration;

namespace SiteManagement.Specs.FacadeSpecs.SiteConfiguration
{
    [Subject(typeof (ConfigFileGenerator))]
    public class With_a_config_file_generator_and_config_files
    {
        Establish context = () =>
                                {
                                    while (Directory.Exists(SourceDirectory)) Directory.Delete(SourceDirectory, true);
                                    while (Directory.Exists(TargetDirectory)) Directory.Delete(TargetDirectory, true);
                                    Directory.CreateDirectory(SourceDirectory);
                                    Directory.CreateDirectory(TargetDirectory);
                                    File.WriteAllText(Path.Combine(SourceDirectory, "web.config"), sourceConfig);
                                    File.WriteAllText(Path.Combine(SourceDirectory, "web.Stage.config"), stageConfigTransform);
                                    File.WriteAllText(Path.Combine(SourceDirectory, "web.Live.config"), liveConfigTransform);

                                    ClassUnderTest = new ConfigFileGenerator();
                                };

        static string sourceConfig = @"<?xml version=""1.0"" encoding=""utf-8""?>
<configuration>
  <appSettings>
    <add key=""EnvironmentSpecificSetting"" value=""Raw/Dev""/>
    <add key=""EnvironmentAgnosticSetting"" value=""3.1415""/>
  </appSettings>
</configuration>";

        static string stageConfigTransform = @"<?xml version=""1.0"" encoding=""utf-8""?>
<configuration xmlns:xdt=""http://schemas.microsoft.com/XML-Document-Transform"">
  <appSettings>
    <add key=""EnvironmentSpecificSetting"" value=""Stage Value"" xdt:Locator=""Match(key)"" xdt:Transform=""Replace"" />
  </appSettings>
</configuration>";

        static string liveConfigTransform = @"<?xml version=""1.0"" encoding=""utf-8""?>
<configuration xmlns:xdt=""http://schemas.microsoft.com/XML-Document-Transform"">
  <appSettings>
    <add key=""EnvironmentSpecificSetting"" value=""Live Value"" xdt:Locator=""Match(key)"" xdt:Transform=""Replace"" />
  </appSettings>
</configuration>";

        protected static string SourceDirectory = @"input";
        protected static string TargetDirectory = @"output";
        protected static string TargetFile = Path.Combine(TargetDirectory, @"web.config");
        protected static IConfigFileGenerator ClassUnderTest;
    }

    [Subject(typeof (ConfigFileGenerator))]
    public class When_transforming_a_staging_config : With_a_config_file_generator_and_config_files
    {
        Because of = () => ClassUnderTest.TransformWebConfig(@"Stage", new DirectoryInfo(SourceDirectory), new DirectoryInfo(TargetDirectory));

        It should_generate_the_file = () => File.Exists(TargetFile).ShouldBeTrue();
        It should_contain_the_transformed_data = () => File.ReadAllText(TargetFile).ShouldContain("key=\"EnvironmentSpecificSetting\" value=\"Stage Value\"");
        It should_contain_the_non_transformed_data = () => File.ReadAllText(TargetFile).ShouldContain("3.1415");
    }

    [Subject(typeof (ConfigFileGenerator))]
    public class When_transforming_a_live_config : With_a_config_file_generator_and_config_files
    {
        Because of = () => ClassUnderTest.TransformWebConfig(@"Live", new DirectoryInfo(SourceDirectory), new DirectoryInfo(TargetDirectory));

        It should_generate_the_file = () => File.Exists(TargetFile).ShouldBeTrue();
        It should_contain_the_transformed_data = () => File.ReadAllText(TargetFile).ShouldContain("key=\"EnvironmentSpecificSetting\" value=\"Live Value\"");
        It should_contain_the_non_transformed_data = () => File.ReadAllText(TargetFile).ShouldContain("3.1415");
    }
}
There are other strategies for dealing with web.config transforms, including using Team Foundation Server as your CI server. Most often, I choose a third-party CI server, and these two strategies have served me well.

Thursday, September 20, 2012

Are we Agile yet?

TL;DR
It doesn't matter what process you follow; the people involved will determine the success or failure of a project.



In the beginning, there was chaos. Developers were making software, but business couldn't really manage it.

Then came waterfall. This was nice because it was easy to manage. And luckily, we now had computers to manage the schedules, because they were always slipping.

When it was realized that developers are virtually incapable of estimating how long it takes to do anything longer than a few months (or more often days), clever managers came up with a solution: shorten the cycle! Thus was the Spiral method, an iterative waterfall, born. Sure, the schedules still slipped, but they didn't slip as much because the iterations were shorter.

While the managers were trying to fix waterfall (because they loved its predictability), other smart people were trying other ways to solve these problems and learning from their experiments. Eventually, they got together to talk about their findings and the Agile Manifesto was born.

Agile in the early days was awesome. It was mostly XP-like. The iterations were short. The code was test driven. The customer was part of the team. The team members had Courage and they valued Simplicity, Communication, Feedback and Respect. They had practices that helped them achieve success.

Managers saw the awesome things that were happening with these Agile Teams and they took the parts that they thought were crucial and formed a methodology for software. It was mostly Scrum-ish. There were iterations and planning and schedules and all kinds of lovely charts for the managers. The XP practices (pairing, TDD, full team engagement, continuous integration, etc.) were minimized if they were used at all. And it was mostly OK, because the teams were still doing a lot better than they had before under waterfall.

So with Scrum, we have solved all of the problems, right? Maybe, but let's see what happens to a project over time:
  • For the first few sprints, the team goes really fast! Features are coming left and right. The customers are happy!
  • Then the customers start to notice things that they want changed. Some are defects that need correction, but mostly this is just the natural process of figuring out what you REALLY want after you have seen what you asked for.
  • As the team changes the features, they start to slow down. After all, the architecture wasn't designed for these features because no one had thought of them when they were architecting.
  • The team continues to make progress, but then the testers start to complain that the old features aren't quite right anymore. They also don't have time to thoroughly test what the developers did before because new features are still coming pretty quickly. Couldn't the developers please stop breaking existing features?
  • Development slows further as the amount of code grows and the number of hacks and workarounds increases. Quality starts to suffer.

I don't think that is surprising to anyone who has been involved in these projects, but that wasn't how it was in the beginning of Agile. What went wrong? The team followed Scrum, but since the developers were less experienced than the original Agilists, they didn't really follow the XP practices. Besides, everyone was Sprinting! You don't have time to pair program or learn TDD or refactor when you are Sprinting! Duh!

So, what brings success to a project isn't the process. It must be something else. It's the people. Skilled, smart, dedicated people will find a way to make the best product within the given constraints. If these people have better constraints, they will produce a better product, of course. And this raises the question: how do we find these people?

Thursday, September 13, 2012

Web.config Encryption

In my last post, I showed how web.config transforms can be used to manage the complexity of config files in an ASP.NET project. One thing that often comes up in mature environments is that certain parts of the web.config are need-to-know only. Examples include production database passwords, payment gateway authentication keys, etc. Of course, this isn't restricted to production environments, but it is most common there.

Fortunately, there is a solution to this built right into the ASP.NET engine: Encrypted Config Sections. Once the sensitive sections of the web.config transform files have been encrypted, the files can be added to source control and tracked just like any other file in the project without fear that the sensitive data will be mishandled in any way.

In order to encrypt the files, access to the production web server is required. All the following steps must be performed in an elevated (run as administrator) command prompt or they will fail with no useful exception information.

Exporting the Machine Key

Create an exportable, machine-level RSA key:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis -pc "MyWebServerRSA" -exp

To export the key from one machine to use on another in the same environment/cluster:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis -px "MyWebServerRSA" "MyWebServerRSA.xml" -pri
The new key file can be found in C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

Importing the Machine Key

For each machine in the web farm do the following:
Copy the exported RSA key to that machine, then import that key.

C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis -pi "MyWebServerRSA" "MyWebServerRSA.xml" -exp

Authorizing the Machine Key

For each machine in the web farm do the following:
Authorize the user to use the key. MyWebServer is the App Pool name.

C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis -pa "MyWebServerRSA" "IIS APPPOOL\MyWebServer"

Configuring Web Application to Use Imported Machine Key

Add the custom encryption provider to the web.config right after the configSections node. This step is necessary whenever you plan to share the encrypted configs across multiple servers.

<configProtectedData>
  <providers>
    <add name="MyWebServerRSAProvider"
         keyContainerName="MyWebServerRSA"
         description="Uses RsaCryptoServiceProvider to encrypt and decrypt"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </providers>
</configProtectedData>

Now encrypt the section containing the sensitive data using the configured provider. This only encrypts the web.config, not any of the transform files. In this example, we only encrypt the connectionStrings configuration node.

C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis -pef connectionStrings C:\MyWebServer -prov MyWebServerRSAProvider

Now the connectionStrings node is encrypted using the key on the production server. Open the web.config file and copy the whole node. Paste it into the config transform for the production environment. Add the xdt:Transform="Replace" attribute to the connectionStrings node. Without this attribute, the node from the base web.config will not be replaced by the encrypted node.
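
The result in the production transform looks roughly like this sketch; the EncryptedData contents are generated by aspnet_regiis and are elided here:

<connectionStrings configProtectionProvider="MyWebServerRSAProvider" xdt:Transform="Replace">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                 xmlns="http://www.w3.org/2001/04/xmlenc#">
    <!-- EncryptionMethod, KeyInfo, and CipherData as generated by aspnet_regiis -->
  </EncryptedData>
</connectionStrings>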

Decrypting the Encrypted Section

Due to password retention policies and staff turnover, changes to the encrypted section may be necessary. The encrypted section can be decrypted by running the following command from an elevated command prompt on the production server:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis -pdf connectionStrings C:\MyWebServer

Thursday, September 6, 2012

.NET web.config Transformations

One of the nice features that has been around for a while in .NET is web.config transforms. If you are unfamiliar, these config transforms allow you to create a base config file and then use transform files that contain only the differences between your deployment environments.

Here is an example of a very simple web.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
<appSettings>
<add key="WebTransform" value="Raw/Dev"/>
...
</appSettings>
</configuration>

And here is the web.QA.config transform file that would update the key when deployed into the QA environment:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<appSettings>
<add key="WebTransform" value="QA" xdt:Locator="Match(key)" xdt:Transform="Replace" />
</appSettings>
</configuration>

This makes management of the config for each environment much simpler. The only drawback is that the transforms are performed only when deploying manually or when using Team Foundation Server as your continuous integration system. If you happen to prefer a different CI tool, you will need to perform the config transforms a different way. Here is one way to use this feature from a command line.

First, copy the build targets to the CI agent.
The files can be found in the Visual Studio installation path in the Web folder.
For example: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web
The files that need to be copied are:
Microsoft.Web.Publishing.Tasks.dll
Microsoft.Web.Publishing.targets

Now you need to create an MSBuild file on the CI agent in the path where the project builds. For this example we will use X:\CIAgent\WebApplication. Name it build_qa.proj and add the following content:
<Project DefaultTargets="Transform" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <UsingTask AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" TaskName="TransformXml" />

  <PropertyGroup>
    <ProjectPath>X:\CIAgent\WebApplication</ProjectPath>
    <DeployPath>X:\CIAgent\WebApplication\Output</DeployPath>
    <TransformInputFile>$(ProjectPath)\Web.config</TransformInputFile>
    <TransformFile>$(ProjectPath)\Web.QA.config</TransformFile>
    <TransformOutputFile>$(DeployPath)\Web.config</TransformOutputFile>
    <StackTraceEnabled>False</StackTraceEnabled>
  </PropertyGroup>

  <Target Name="Transform">
    <TransformXml Source="$(TransformInputFile)"
                  Transform="$(TransformFile)"
                  Destination="$(TransformOutputFile)"
                  StackTrace="$(StackTraceEnabled)" />
  </Target>
</Project>

Now from the command line, run the following command:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe X:\CIAgent\WebApplication\build_qa.proj /t:Transform
A transformed web.config will be produced in X:\CIAgent\WebApplication\Output. This file can then be copied directly to your QA web server by using a file copy command from your CI agent.
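
For example, something along these lines would work, though the UNC path to the QA server is hypothetical:

xcopy /Y X:\CIAgent\WebApplication\Output\Web.config \\qa-web-server\wwwroot\WebApplication\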

Of course, this isn't the only way to generate these transformed web.config files. This is just the way we chose to solve the problem with one CI tool on one team. I may share additional methods in the future.

Thursday, August 30, 2012

TDD vs BDD

At a recent Utah Software Craftsmanship group meeting, I was asked to share my experiences using MSpec and explain how TDD is different from BDD. Since I have been using NUnit for years and MSpec since February, I was able to discuss some of the differences in the two styles of testing.

First, A Definition

TDD is Test Driven Development. This means writing a test that fails because the specified functionality doesn't exist, then writing the simplest code that can make the test pass, then refactoring to remove duplication, etc. You repeat this Red-Green-Refactor loop over and over until you have a complete feature.

BDD is Behavior Driven Development. This means creating an executable specification that fails because the feature doesn't exist, then writing the simplest code that can make the spec pass. You repeat this until a release candidate is ready to ship.


At this point, you should stop reading this post and instead go read Dan North's original article on BDD and, if you feel so inclined, my apology for this post. If you choose to continue here, then know that my understanding of BDD at the time I wrote this was fundamentally flawed, and you should only use this article as a reference for the differences between the three testing tools.


Then again, there is this from Liz Keogh:

They’re called different things
The difference is that one is called Behaviour Driven Development – and some people find that wording useful – and one (or two) is called (Acceptance) Test Driven Development – and some people find that wording useful in a different way.



And that’s it.

So maybe I wasn't so far off.


Those seem pretty similar, right? They are. The key difference is the scope. TDD is a development practice while BDD is a team methodology. In TDD, the developers write the tests, while in BDD the automated specifications are created by users or testers (with developers wiring them to the code under test). For small, co-located, developer-centric teams, TDD and BDD are effectively the same. For a much more detailed discussion, InfoQ sponsored a virtual panel on the topic.

Testing Style

So if NUnit != TDD and MSpec != BDD, what is the difference between these tools? NUnit and MSpec are two tools that support different styles of developer testing. NUnit promotes the Arrange-Act-Assert style of testing, while MSpec requires the Given-When-Then (or Establish context-Because of-It should) style.

Let's look at an example from the bowling game kata:
//NUnit Test
[TestFixture]
public class BowlingGameTests
{
  private Game _game;

  [SetUp]
  public void SetUp()
  {
    _game = new Game();
  }

  [Test]
  public void The_score_for_a_gutter_game_is_0()
  {
    RollMany(20, 0);

    Assert.That(_game.Score() == 0);
  }

  private void RollMany(int times, int pins)
  {
    for (int i = 0; i < times; i++)
    {
      _game.Roll(pins);
    }
  }
}
//MSpec Test
public class With_a_game
{
  Establish context = () => { _game = new Game(); };

  protected static void RollMany(int times, int pins)
  {
    for (int i = 0; i < times; i++)
    {
      _game.Roll(pins);
    }
  }

  protected static Game _game;
}

public class when_rolling_a_gutter_game : With_a_game
{
  Because of = () => RollMany(20, 0);

  It should_score_zero_for_the_game = () => _game.Score().ShouldEqual(0);
}
For a more detailed example, including all the tests for the kata in both styles, please see this github repository.

How can my team do BDD?

The key to BDD is to get the specifications from the user. In other words, create tests that are not written by developers. This means tests that are not written in a programming language. These tests should be written in a language close to English (or whatever your team speaks). One of the oldest and best tools for this is FitNesse. The advantages of using FitNesse include:
  • it facilitates thinking about features and problems in the language of business rather than the language of code
  • it requires you to focus on data in your tests
  • it can be easily included in a continuous integration environment
  • it includes a wiki for sharing information about the project
  • it requires the creation of fixtures that will help define, and refine, the API
  • it is easily shared with non-developer users

But how do I use FitNesse!?

One easy way to get started is to clone this repository and follow the instructions in the README.md. FitNesse tests consist of two parts: the test pages in the wiki and the fixtures that connect the pages to the code under test. In the bddtddfitnesse repo, you will find a file FinalScore.cs in the Fixtures folder. This is the fixture used by the tests.
using System;
using System.Globalization;
using System.Linq;

namespace BowlingKata.Fixtures
{
    public class FinalScore
    {
        private string[] _rolls;

        public void Rolls(string rolls)
        {
            _rolls = rolls.Trim().Split(' ');
        }

        public string Score()
        {
            var game = new Game();
            foreach (int roll in _rolls.Select(x => Convert.ToInt32(x)))
            {
                game.Roll(roll);
            }
            return game.Score().ToString(CultureInfo.CurrentCulture);
        }
    }
}
You can see that the Rolls method takes in a string and converts it to an array of rolls, which are used in the Score method. The Score method uses the class under test to generate a score from the input rolls. This is then returned as a string. To see the tests that use this fixture, navigate from the FrontPage to the SuiteBowlingGame and then to the TestScoring page. This page contains some FitNesse-specific setup, some text describing the rules, and then a test table.
!|final score                                  |
|rolls                                  |score?|
|0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0|0     |
|1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1|20    |
|3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3|60    |
|5 5 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0|16    |
|10 3 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 |24    |
|10 10 10 10 10 10 10 10 10 10 10 10    |300   |
Each row is a separate test, and the value found in the rolls column is passed into the Rolls method of the fixture. The ? in the score column header tells FitNesse to query the result of the Score method on the fixture and compare it to the value in the column. When they match, the cell turns green; when they don't, the cell turns red and the expected and actual values are displayed for the user.

So TDD or BDD?

The real answer is both. You need developer tests for the fast feedback, and you want user tests to ensure that the features are built to the user specs. We use FitNesse, MSpec (for unit tests), and NUnit (for UI tests) on my team at Pluralsight. We try to follow the double loop described in Growing Object-Oriented Software, where we write an acceptance test in FitNesse and then unit and integration tests in MSpec. Following this double loop helps us stay focused and get features done quickly and cleanly.

Thursday, August 23, 2012

Call me Pluralsight

I don't know what to say about this, but that's ok, because this video speaks for itself:



(Also, don't get in my way, I guess.)