Alan Dean

CTO, Developer, Agile Practitioner

Thursday, November 17, 2011

A Definition of Done is ...

Those who work with me know that the Definition of Done is something which I consider to be vital to effective software development. For me the subtitle of any Definition of Done should be along the lines of:

As a professional, when I say that my work is Done, it means that I genuinely believe that I have no further work to do, that I have myself verified that the work is of the necessary quality and completeness, and that I have had the work appropriately checked by others. I will honestly be surprised if my work turns out to be defective in some manner when I say it is done. Please take a look at the list below of the characteristics that I consider to be important to my work:

After this subtitle, there is the list that most Agile practitioners are familiar with.

Saturday, November 12, 2011

Unit test naming convention

I have been using TDD since before it was even called that. The practice was called test-first at the time and I learnt to do it with ComUnit against VB6 code. I suspect that there aren't many people who can put down more than a decade of TDD on their CV. Probably due to this, I employ somewhat idiosyncratic practices, such as a distinct interface focus (both the default interface and those additionally implemented). Accordingly, I have a fairly well elaborated naming convention for my unit tests in order to make test scenario coverage more rigorous and visible.

The overall convention is public void memberKind_parameterType_parameterType_whenScenarioInfo()

Member Kinds (when default or implicit interface implementation)

  • Constructors: ctor
  • Properties: prop
  • Indexers: index
  • Methods: op, opImplicit, opExplicit
  • Generic Methods: op_MethodNameOfT, op_MethodNameOfClass, op_MethodNameOfNew, op_MethodNameOfIInterfaceName

Member Kinds (when explicit interface implementation)

  • Properties: IInterfaceName_prop
  • Indexers: IInterfaceName_index
  • Methods: IInterfaceName_op

Special Names

  • Type definition assertions: a_definition
  • Assert deserialization: xmlDeserialize, jsonDeserialize
  • Assert serialization: xmlSerialize, jsonSerialize

Parameter Types

  • Keywords: int, string
  • Null types: stringNull, TypeNameNull
  • Empty string: stringEmpty
  • Invalid string: stringInvalid
  • Specific values: longZero, intOne, decimalNegative, floatEpsilon, shortMin, DateTimeNow

When
Used to disambiguate two or more tests which otherwise have the same name, e.g. op_ToString_whenDefault(), op_ToString_whenDescriptionIsNull()

Examples
Cavity has almost complete test coverage, so it offers plenty of examples of the convention in use.
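To show how the convention reads in practice, here is a hypothetical sketch: nothing below is from the Cavity source, and the Money class and its members are invented purely to illustrate the shapes of the names.

```csharp
using Xunit;

// Hypothetical facts for an imagined Money type, named per the convention above.
public sealed class MoneyFacts
{
    [Fact]
    public void a_definition()
    {
        // Type definition assertions (sealed, interfaces implemented, etc.) go here.
    }

    [Fact]
    public void ctor()
    {
        // Default constructor scenario.
    }

    [Fact]
    public void ctor_decimalNegative()
    {
        // Constructor taking a decimal, exercised with a negative value.
    }

    [Fact]
    public void prop_Amount()
    {
        // The Amount property.
    }

    [Fact]
    public void op_ToString_whenDefault()
    {
        // "when" disambiguates this from other op_ToString tests.
    }

    [Fact]
    public void IEquatable_op_Equals_MoneyNull()
    {
        // Explicit interface implementation, exercised with a null argument.
    }
}
```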

Update

So obvious that I forgot to mention it, but worth putting on the record: each class should have exactly one test class, i.e. Example.cs has Example.Facts.cs (if xUnit) or Example.Tests.cs (if NUnit, MSTest, etc.). The period in Example.Facts or Example.Tests is to force the same sort order of units and tests in the Visual Studio solution explorer tree view. I use a pair of templates to assist me (installer available in downloads).

Wednesday, October 12, 2011

Not-so-simple NuGet Packaging

Moving beyond the simple case, here is what I did to package the Cavity log4net trace listener which I'm sharing because I encountered some frustration points.

First, the .nuspec file:

<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>Cavity.Diagnostics.Log4Net</id>
    <version>1.1.0.444</version>
    <title>Cavity log4net trace listener</title>
    <authors>Alan Dean</authors>
    <owners />
    <licenseUrl>http://www.opensource.org/licenses/mit-license.php</licenseUrl>
    <projectUrl>http://code.google.com/p/cavity/</projectUrl>
    <iconUrl>http://www.alan-dean.com/nuget.png</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>A trace listener for log4net, allowing provider-agnostic tracing.</description>
    <summary />
    <copyright>Copyright © 2010 - 2011 Alan Dean</copyright>
    <language />
    <tags>Diagnostics</tags>
    <releaseNotes>Switched off license acceptance dialog.</releaseNotes>
    <dependencies>
      <dependency id="log4net" version="1.2.10" />
    </dependencies>
  </metadata>
  <files>
    <file src="content\app.config.transform" target="content\app.config.transform" />
    <file src="content\log4net.config.transform" target="content\log4net.config.transform" />
    <file src="content\web.config.transform" target="content\web.config.transform" />
    <file src="content\Properties\log4net.cs" target="content\Properties\log4net.cs" />
    <file src="lib\net35\Cavity.Diagnostics.Log4Net.dll" target="lib\net35\Cavity.Diagnostics.Log4Net.dll" />
    <file src="lib\net40\Cavity.Diagnostics.Log4Net.dll" target="lib\net40\Cavity.Diagnostics.Log4Net.dll" />
    <file src="tools\Install.ps1" target="tools\Install.ps1" />
  </files>
</package>

Right off the bat, you can see that more is going on here. The trace listener is dependent on log4net, and I also have a bunch of content in addition to my two assemblies.

The target project will need to have either its app.config or its web.config extended, so there is a transform for each. The target will also need an assembly attribute applied, so I took the simple expedient of dropping log4net.cs into Properties alongside AssemblyInfo.cs. However, log4net also requires XmlConfigurator.Configure() to be called. This call needs to be in either Main() or Application_Start(), and therefore existing code needs to be edited (rather than a new code file dropped in). To accomplish this, we have to use EnvDTE, as exposed in the Install.ps1 PowerShell script (I also mark the log4net.config file to copy to output during build):

param($installPath, $toolsPath, $package, $project)

$project.ProjectItems.Item("log4net.config").Properties.Item("CopyToOutputDirectory").Value = 1

try
{
  $item = $project.ProjectItems.Item("Program.cs")
}
catch [System.Management.Automation.MethodInvocationException]
{
}

if (!$item)
{
  $item = $project.ProjectItems.Item("global.asax").ProjectItems.Item("global.asax.cs")
}

$terminator = ""
if ($item.FileCodeModel.Language -eq "{B5E9BD34-6D3E-4B5D-925E-8A43B79820B4}")
{
  $terminator = ";"
}

$win = $item.Open("{7651A701-06E5-11D1-8EBD-00A0C90F26EA}")
$text = $win.Document.Object("TextDocument");
$namespace = $item.FileCodeModel.CodeElements | where-object {$_.Kind -eq 5}
$class = $namespace.Children | where-object {$_.Kind -eq 1}

$methods = $class.Children | where-object {$_.Name -eq "Main"}
if (!$methods)
{
  $methods = $class.Children | where-object {$_.Name -eq "Application_Start"}
  if (!$methods)
  {
    [system.windows.forms.messagebox]::show("methods is null")
  }
}

$edit = $methods.StartPoint.CreateEditPoint();
$edit.LineDown()
$edit.CharRight(1)
$edit.Insert([Environment]::NewLine)
$edit.Insert(" log4net.Config.XmlConfigurator.Configure()")
$edit.Insert($terminator)

I have to say that the PowerShell + EnvDTE experience was rather less than enjoyable but I got the basics of what I needed to work.

Simple NuGet Packaging

Over the last week I have started publishing my Cavity libraries on to NuGet, starting with my Unit Testing Fluent API.

The API is implemented in a single assembly with no non-BCL dependencies, which makes it the simplest case to pack for NuGet.

This is the .nuspec file:

<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>Cavity.Testing.Unit</id>
    <version>1.1.0.444</version>
    <title>Cavity Unit Testing</title>
    <authors>Alan Dean</authors>
    <owners />
    <licenseUrl>http://www.opensource.org/licenses/mit-license.php</licenseUrl>
    <projectUrl>http://code.google.com/p/cavity/</projectUrl>
    <iconUrl>http://www.alan-dean.com/nuget.png</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Fluent API for asserting types and properties.</description>
    <summary />
    <copyright>Copyright © 2010 - 2011 Alan Dean</copyright>
    <language />
    <tags>TDD</tags>
    <releaseNotes>Switched off license acceptance dialog.</releaseNotes>
  </metadata>
  <files>
    <file src="lib\net35\Cavity.Testing.Unit.dll" target="lib\net35\Cavity.Testing.Unit.dll" />
    <file src="lib\net40\Cavity.Testing.Unit.dll" target="lib\net40\Cavity.Testing.Unit.dll" />
  </files>
</package>

I have implemented framework targeting in my build process, so in this case I have two assemblies (.NET 3.5 and .NET 4.0) which I have copied into the lib subdirectory.

For a simple package like this, that is all you need to configure; just pack and push.
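For the record, the pack and push itself is just a couple of NuGet.exe commands. The file names below follow the .nuspec shown above, and I'm assuming the API key has already been registered with the SetApiKey command:

```shell
nuget pack Cavity.Testing.Unit.nuspec
nuget push Cavity.Testing.Unit.1.1.0.444.nupkg
```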

Sunday, February 20, 2011

How to conditionally skip compilation

As I mentioned in my last post, I was having problems getting an MVC web application to play nice with my framework targeting builds. Specifically, my CI server would fall over if I tried to build using Framework 3.5 when the web application was using 4.0 (due to 4.0-specific web.config settings that 3.5 does not recognise). I managed to spend a large portion of the afternoon searching for an answer until I had one of those “eureka” moments. My objective was simply not to build the web application unless the framework was 4.0 (the same would apply to any project that is framework version-specific). I tried putting a conditional on the project element. No joy. I tried a whole bunch of other things and it was starting to look like I would have to manually configure each project to be built – very nasty, not at all what I want.

My “eureka” moment happened as I was gazing forlornly at a project file. My eye alighted upon an attribute of DefaultTargets="Build" on the project element. Build? I thought… I’ve not seen a target called that. A quick check confirmed my suspicion: the “Build” target is effectively inherited. In a flash I thought: I wonder if I can intercept this? I changed the attribute to DefaultTargets="Conditional", added the following target, and all is well again in Narnia:

<Target Name="Conditional">
  <CallTarget Targets="Build" Condition=" '$(TargetFrameworkVersion)' == 'v4.0' " />
</Target>

Framework version targeting with MSBUILD

I faced a question this weekend: how best should I support conditional compilation by framework? The proximate reason for asking was a need to use BigInteger in the Cavity project in order to support 128-bit base-36 notation with a minimum of fuss. This is a new value type in Framework 4.0, and up until now Framework 3.5 has been the target for Cavity assemblies. As 3.5 is going to be around for a good while yet (heck, there is still plenty of 2.0 in production today) I didn’t want to orphan support for that version. I also didn’t want to add complexity to my build or maintenance activities. This ruled out brute-force options such as creating a branch for 3.5 or creating new solution and project files for one or other of the framework versions.

A search yielded a varied collection of advice. Some was plainly potty but a couple of StackOverflow answers [1] [2] looked promising. Taking the answers and combining a bit of common sense, I put together a quick spike to verify that I had the gist of it. Here is what I learnt:

  1. The best way to control targeting is to use property parameters with a build file. Here is a .bat file to build release versions targeting each major framework version:
    MSBUILD build.xml /p:Configuration=Release /p:TargetFrameworkVersion=v2.0
    MSBUILD build.xml /p:Configuration=Release /p:TargetFrameworkVersion=v3.5
    MSBUILD build.xml /p:Configuration=Release /p:TargetFrameworkVersion=v4.0

  2. You don’t need to do anything in the build.xml or the .sln file for this to work. You do, however, need to do a little work in the .csproj files:
    • First, you need to decide which framework version you want to work with in Visual Studio (typically, this will be the latest version) and configure TargetFrameworkVersion accordingly:

      <TargetFrameworkVersion Condition=" '$(TargetFrameworkVersion)' == '' ">v4.0</TargetFrameworkVersion>

    • To make the build output obvious, set the OutputPath to use properties:

      <OutputPath>bin\$(Configuration) $(TargetFrameworkVersion)\</OutputPath>

    • Next, you should set up some conditional property groups:
      <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v2.0' ">
          <DefineConstants>NET20</DefineConstants>
          <TargetFrameworkVersionNumber>2.0</TargetFrameworkVersionNumber>
      </PropertyGroup>
      <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v3.5' ">
          <DefineConstants>NET35</DefineConstants>
          <TargetFrameworkVersionNumber>3.5</TargetFrameworkVersionNumber>
      </PropertyGroup>
      <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v4.0' ">
          <DefineConstants>NET40</DefineConstants>
          <TargetFrameworkVersionNumber>4.0</TargetFrameworkVersionNumber>
      </PropertyGroup>

    • This allows you to configure framework-specific assembly references, such as System.Core and Microsoft.CSharp:

      <Reference Include="System.Core" Condition=" '$(TargetFrameworkVersionNumber)' >= '3.5' " />
      <Reference Include="Microsoft.CSharp" Condition=" '$(TargetFrameworkVersionNumber)' >= '4.0' " />

    • You should now be able to target framework versions in a straightforward manner by running the batch file above.

  3. If you want to conditionally include classes, simply apply a framework condition:
    <Compile Include="Class20.cs" Condition=" '$(TargetFrameworkVersion)' == 'v2.0' " />

  4. I also defined constants above to allow conditional compilation at code level:
    namespace Example
    {
        public sealed class Class1
        {
    #if NET20
            public void Net20()
            {
            }
    #endif
    
    #if NET35
            public void Net35()
            {
            }
    #endif
    
    #if NET40
            public void Net40()
            {
            }
    #endif
        }
    }



The only pain I have really encountered so far is getting MVC web applications to play nicely.

Tuesday, June 29, 2010

Object Pool Pattern

In the last post I discussed the Multiton pattern, and this post continues the theme of non-GoF patterns by looking at Object Pool, another specialised Singleton. The purpose of this pattern is to re-use object instances, avoiding the cost of repeated creation and destruction. My mnemonic this time is a Car Pool, which is just a collection of cars for my purposes:

public sealed class Car
{
    public Car(string registration)
    {
        this.Registration = registration;
    }

    public string Registration
    {
        get;
        set;
    }

    public override string ToString()
    {
        return this.Registration;
    }
}

The pool implementation also uses weak references to handle garbage collected cars which have not been explicitly returned to the pool:

using System;
using System.Collections.Generic;
using System.Linq;

public sealed class CarPool
{
    private static CarPool _pool = new CarPool();

    private CarPool()
    {
        this.Cars = new Dictionary<Car, WeakReference>();
    }

    public static int Availability
    {
        get
        {
            int value = 0;

            lock (_pool)
            {
                value = _pool.Cars.Where(x => null == x.Value || !x.Value.IsAlive).Count();
            }

            return value;
        }
    }

    private Dictionary<Car, WeakReference> Cars
    {
        get;
        set;
    }

    public static void Add(params Car[] cars)
    {
        foreach (var car in cars)
        {
            lock (_pool)
            {
                _pool.Add(car);
            }
        }
    }

    public static Car Get()
    {
        Car result = null;

        if (0 < CarPool.Availability)
        {
            lock (_pool)
            {
                var item = _pool.Cars.Where(x => null == x.Value || !x.Value.IsAlive).FirstOrDefault();

                var value = new WeakReference(item.Key);
                _pool.Cars[item.Key] = value;

                result = (Car)value.Target;
            }
        }

        return result;
    }

    public static void Return(Car car)
    {
        if (null == car)
        {
            throw new ArgumentNullException("car");
        }

        lock (_pool)
        {
            _pool.Cars[car] = null;
        }
    }

    private void Add(Car car)
    {
        this.Cars.Add(car, new WeakReference(null));
    }
}

Here is a test which verifies the expected behaviour:

using Xunit;

public sealed class ObjectPoolFacts
{
    [Fact]
    public void car_pooling()
    {
        Car one = new Car("ABC 111");
        Car two = new Car("ABC 222");
        CarPool.Add(one, two);

        Car first = CarPool.Get();
        Assert.Same(one, first);

        Car second = CarPool.Get();
        Assert.Same(two, second);

        Assert.Null(CarPool.Get());

        CarPool.Return(first);
        CarPool.Return(second);

        second = CarPool.Get();
        Assert.Same(one, second);
    }
}

Multiton Pattern

I’m doing a little ‘brushing up on the basics’ at the moment and as part of that effort I am working up some pattern examples, starting with creational patterns. These include staples such as Factory Method, Abstract Factory, Prototype, Singleton and so on, but there are other creational patterns which weren’t in the Gang of Four (GoF) Design Patterns book. One of these is the Multiton. I don’t know what its provenance is, but it is an extension to the Singleton pattern which provides centralised access to a single collection, making keys unique within scope. In my example, the singleton is declared as a static member so it has application domain scope.

I like to work up examples that feel (at least somewhat) real-world as I find that these are easier to remember later on. For the Multiton pattern I decided to use the Rolodex which is simply a collection of cards for my purposes:

public sealed class Card
{
    internal Card(string key)
    {
        this.Key = key;
    }

    public string Information
    {
        get;
        set;
    }

    public string Key
    {
        get;
        set;
    }
}

The pattern defines that item creation is handled by a static factory if the key does not exist in the collection:

using System;
using System.Collections.ObjectModel;
using System.Linq;

public sealed class Rolodex
{
    private static Rolodex _rolodex = new Rolodex();

    private Rolodex()
    {
        this.Cards = new Collection<Card>();
    }

    private Collection<Card> Cards
    {
        get;
        set;
    }

    public static Card Open(string key)
    {
        Card result = null;

        lock (_rolodex)
        {
            result = _rolodex.Cards
                .Where(x => string.Equals(x.Key, key, StringComparison.Ordinal))
                .FirstOrDefault();

            if (null == result)
            {
                result = new Card(key);
                _rolodex.Cards.Add(result);
            }
        }

        return result;
    }
}

Here is a test which verifies the expected behaviour:

using Xunit;

public sealed class MultitonFacts
{
    [Fact]
    public void rolodex_card()
    {
        string key = "John Doe";

        Card expected = Rolodex.Open(key);
        expected.Information = "john.doe@example.com";

        Card actual = Rolodex.Open(key);

        Assert.Same(expected, actual);
    }
}

It’s worth pointing out that, as with all Singleton patterns, the plain vanilla pattern doesn’t lend itself to unit testing as-is. The answer is to provide a wrapper for mocking purposes. Here is an example of doing so for DateTime.UtcNow:

using System;

public static class DateTimeFactory
{
    [ThreadStatic]
    private static DateTime? _mock;

    public static DateTime Today
    {
        get
        {
            DateTime value = DateTime.Today;

            if (null != _mock)
            {
                value = _mock.Value.Date;
            }

            return value;
        }
    }

    public static DateTime UtcNow
    {
        get
        {
            DateTime value = DateTime.UtcNow;

            if (null != _mock)
            {
                value = _mock.Value;
            }

            return value;
        }
    }

    public static DateTime? Mock
    {
        get
        {
            return _mock;
        }

        set
        {
            _mock = value;
        }
    }

    public static void Reset()
    {
        DateTimeFactory.Mock = null;
    }
}
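To illustrate how the wrapper helps, here is a usage sketch: a hypothetical xUnit fact (named following the convention from my earlier post) that freezes UtcNow via Mock and restores real time with Reset() in a finally block, so other tests on the same thread are not affected.

```csharp
using System;
using Xunit;

public sealed class DateTimeFactoryFacts
{
    [Fact]
    public void prop_UtcNow_whenMocked()
    {
        try
        {
            // Freeze time at a known instant.
            DateTimeFactory.Mock = new DateTime(2010, 6, 29, 12, 0, 0, DateTimeKind.Utc);

            // Code under test that reads DateTimeFactory.UtcNow now sees the frozen value.
            Assert.Equal(new DateTime(2010, 6, 29, 12, 0, 0, DateTimeKind.Utc), DateTimeFactory.UtcNow);
        }
        finally
        {
            // Restore real time for subsequent tests.
            DateTimeFactory.Reset();
        }
    }
}
```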

Sunday, June 27, 2010

Routing TcpClient HTTP requests through the default proxy

I’m coding an HttpClient at the moment: mostly for self-education but something useful might arise as well. Here is a trivial example of making an HTTP request using the TcpClient class:

string response = null;
System.Net.Sockets.TcpClient tcp = null;
try
{
    tcp = new System.Net.Sockets.TcpClient("www.example.com", 80);

    using (var stream = tcp.GetStream())
    {
        using (var writer = new System.IO.StreamWriter(stream))
        {
            writer.WriteLine("GET / HTTP/1.1");
            writer.WriteLine("Host: www.example.com");
            writer.WriteLine("Connection: close");
            writer.WriteLine(string.Empty);
            writer.Flush();
            using (var reader = new System.IO.StreamReader(stream))
            {
                response = reader.ReadToEnd();
            }
        }
    }
}
finally
{
    if (null != tcp)
    {
        tcp.Close();
    }
}

HTTP is simply an application-level protocol layered on top of TCP, so this works fine. However, as soon as the HttpClient becomes non-trivial, debugging becomes an issue. Thankfully we have tools to see what’s happening on the wire. Wireshark is an excellent tool which watches all TCP traffic on a network adapter, but it is somewhat overkill for watching just HTTP traffic. Fiddler, on the other hand, is my tool of choice for monitoring HTTP traffic. Unfortunately the code shown above won’t appear in Fiddler as-is: Fiddler acts as a proxy, and the code doesn’t cater for that. The TcpClient class doesn’t either, because a web proxy works at the HTTP layer rather than the TCP layer.

In order to overcome this limitation, we can use the WebClient class to resolve the default proxy:

var requestUri = new System.Uri("http://www.example.com/");
Uri proxy = null;
using (var web = new System.Net.WebClient())
{
    proxy = web.Proxy.GetProxy(requestUri);
}

tcp = new System.Net.Sockets.TcpClient(proxy.DnsSafeHost, proxy.Port);

Now Fiddler will happily monitor the traffic. My thanks to @srstrong, @serialseb, @blowdart and @benlovell for helping me figure this out.