Alan Dean

CTO, Developer, Agile Practitioner

Tuesday, June 29, 2010

Object Pool Pattern

In the last post I discussed the Multiton pattern, and this post continues the theme of non-GoF patterns by looking at Object Pool, another specialised Singleton. The purpose of this pattern is to re-use object instances and so avoid the cost of repeated creation and destruction. My mnemonic this time is a car pool, which for my purposes is just a collection of cars:

public sealed class Car
{
    public Car(string registration)
    {
        this.Registration = registration;
    }

    public string Registration
    {
        get;
        set;
    }

    public override string ToString()
    {
        return this.Registration;
    }
}

The pool implementation also uses weak references, the intention being that cars which have been garbage collected without being explicitly returned to the pool become available again:

using System;
using System.Collections.Generic;
using System.Linq;

public sealed class CarPool
{
    private static CarPool _pool = new CarPool();

    private CarPool()
    {
        this.Cars = new Dictionary<Car, WeakReference>();
    }

    public static int Availability
    {
        get
        {
            int value = 0;

            lock (_pool)
            {
                value = _pool.Cars.Where(x => null == x.Value || !x.Value.IsAlive).Count();
            }

            return value;
        }
    }

    private Dictionary<Car, WeakReference> Cars
    {
        get;
        set;
    }

    public static void Add(params Car[] cars)
    {
        foreach (var car in cars)
        {
            lock (_pool)
            {
                _pool.Add(car);
            }
        }
    }

    public static Car Get()
    {
        Car result = null;

        lock (_pool)
        {
            // Check for an available car and take it under a single lock,
            // avoiding a race between the availability check and the take.
            var item = _pool.Cars.FirstOrDefault(x => null == x.Value || !x.Value.IsAlive);

            if (null != item.Key)
            {
                var value = new WeakReference(item.Key);
                _pool.Cars[item.Key] = value;

                result = (Car)value.Target;
            }
        }

        return result;
    }

    public static void Return(Car car)
    {
        if (null == car)
        {
            throw new ArgumentNullException("car");
        }

        lock (_pool)
        {
            _pool.Cars[car] = null;
        }
    }

    private void Add(Car car)
    {
        this.Cars.Add(car, new WeakReference(null));
    }
}

Here is a test which verifies the expected behaviour:

using Xunit;

public sealed class ObjectPoolFacts
{
    [Fact]
    public void car_pooling()
    {
        Car one = new Car("ABC 111");
        Car two = new Car("ABC 222");
        CarPool.Add(one, two);

        Car first = CarPool.Get();
        Assert.Same(one, first);

        Car second = CarPool.Get();
        Assert.Same(two, second);

        Assert.Null(CarPool.Get());

        CarPool.Return(first);
        CarPool.Return(second);

        second = CarPool.Get();
        Assert.Same(one, second);
    }
}

Multiton Pattern

I’m doing a little ‘brushing up on the basics’ at the moment and, as part of that effort, I am working up some pattern examples, starting with creational patterns. These include staples such as Factory Method, Abstract Factory, Prototype, Singleton and so on, but there are other creational patterns which weren’t in the Gang of Four (GoF) Design Patterns book. One of these is the Multiton. I don’t know what its provenance is, but it is an extension to the Singleton pattern which provides centralised access to a single collection, ensuring that keys are unique within a given scope. In my example, the singleton is declared as a static member, so it has application domain scope.

I like to work up examples that feel (at least somewhat) real-world as I find that these are easier to remember later on. For the Multiton pattern I decided to use the Rolodex which is simply a collection of cards for my purposes:

public sealed class Card
{
    internal Card(string key)
    {
        this.Key = key;
    }

    public string Information
    {
        get;
        set;
    }

    public string Key
    {
        get;
        private set; // a publicly settable key would undermine uniqueness within the Rolodex
    }
}

The pattern defines that item creation is handled by a static factory if the key does not exist in the collection:

using System;
using System.Collections.ObjectModel;
using System.Linq;

public sealed class Rolodex
{
    private static Rolodex _rolodex = new Rolodex();

    private Rolodex()
    {
        this.Cards = new Collection<Card>();
    }

    private Collection<Card> Cards
    {
        get;
        set;
    }

    public static Card Open(string key)
    {
        Card result = null;

        lock (_rolodex)
        {
            result = _rolodex.Cards
                .Where(x => string.Equals(x.Key, key, StringComparison.Ordinal))
                .FirstOrDefault();

            if (null == result)
            {
                result = new Card(key);
                _rolodex.Cards.Add(result);
            }
        }

        return result;
    }
}

Here is a test which verifies the expected behaviour:

using Xunit;

public sealed class MultitonFacts
{
    [Fact]
    public void rolodex_card()
    {
        string key = "John Doe";

        Card expected = Rolodex.Open(key);
        expected.Information = "john.doe@example.com";

        Card actual = Rolodex.Open(key);

        Assert.Same(expected, actual);
    }
}

It’s worth pointing out that, as with all Singleton patterns, the plain vanilla pattern doesn’t lend itself to unit testing as-is. The answer is to provide a wrapper for mocking purposes. Here is an example of doing so for DateTime.UtcNow:

using System;

public static class DateTimeFactory
{
    [ThreadStatic]
    private static DateTime? _mock;

    public static DateTime Today
    {
        get
        {
            DateTime value = DateTime.Today;

            if (null != _mock)
            {
                value = _mock.Value.Date;
            }

            return value;
        }
    }

    public static DateTime UtcNow
    {
        get
        {
            DateTime value = DateTime.UtcNow;

            if (null != _mock)
            {
                value = _mock.Value;
            }

            return value;
        }
    }

    public static DateTime? Mock
    {
        get
        {
            return _mock;
        }

        set
        {
            _mock = value;
        }
    }

    public static void Reset()
    {
        DateTimeFactory.Mock = null;
    }
}
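To illustrate, here is a hypothetical test (the class and method names are mine, not part of the original post) showing the mock being set and then restored so that other tests on the same thread are unaffected:

```csharp
using System;
using Xunit;

public sealed class DateTimeFactoryFacts
{
    [Fact]
    public void mocked_clock()
    {
        try
        {
            // Pin the clock to a known instant for the duration of the test.
            DateTimeFactory.Mock = new DateTime(2010, 6, 29, 12, 0, 0, DateTimeKind.Utc);

            Assert.Equal(new DateTime(2010, 6, 29, 12, 0, 0, DateTimeKind.Utc), DateTimeFactory.UtcNow);
            Assert.Equal(new DateTime(2010, 6, 29), DateTimeFactory.Today);
        }
        finally
        {
            // Restore the real clock.
            DateTimeFactory.Reset();
        }
    }
}
```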

Sunday, June 27, 2010

Routing TcpClient HTTP requests through the default proxy

I’m coding an HttpClient at the moment: mostly for self-education but something useful might arise as well. Here is a trivial example of making an HTTP request using the TcpClient class:

string response = null;
System.Net.Sockets.TcpClient tcp = null;
try
{
    tcp = new System.Net.Sockets.TcpClient("www.example.com", 80);

    using (var stream = tcp.GetStream())
    {
        using (var writer = new System.IO.StreamWriter(stream))
        {
            writer.WriteLine("GET / HTTP/1.1");
            writer.WriteLine("Host: www.example.com");
            writer.WriteLine("Connection: close");
            writer.WriteLine(string.Empty);
            writer.Flush();
            using (var reader = new System.IO.StreamReader(stream))
            {
                response = reader.ReadToEnd();
            }
        }
    }
}
finally
{
    if (null != tcp)
    {
        tcp.Close();
    }
}

HTTP is simply an application-level protocol layered on top of TCP, so this works fine. However, as soon as the HttpClient becomes non-trivial, debugging becomes an issue. Thankfully we have tools in place to see what’s happening on the wire. Wireshark is an excellent tool which watches all TCP traffic on a network adapter, but it is somewhat overkill for watching just HTTP traffic. Fiddler, on the other hand, is my own tool of choice for monitoring HTTP traffic. Unfortunately the code shown above won’t appear in Fiddler as-is: Fiddler acts as a proxy, and the code doesn’t cater for that. The TcpClient class doesn’t either, because a web proxy works at the HTTP layer rather than at the TCP layer.

In order to overcome this limitation, we can use the WebClient class to resolve the default proxy:

var requestUri = new System.Uri("http://www.example.com/");
Uri proxy = null;
using (var web = new System.Net.WebClient())
{
    proxy = web.Proxy.GetProxy(requestUri);
}

tcp = new System.Net.Sockets.TcpClient(proxy.DnsSafeHost, proxy.Port);

Now Fiddler will happily monitor the traffic. My thanks to @srstrong, @serialseb, @blowdart and @benlovell for helping me figure this out.
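One caveat worth noting: HTTP/1.1 requires the absolute URI, rather than just the path, in the request line when the request is sent to a proxy (RFC 2616, section 5.1.2), so the handcrafted request from the first snippet should be adjusted accordingly. Here is a sketch combining the two pieces:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class ProxiedRequest
{
    static void Main()
    {
        var requestUri = new Uri("http://www.example.com/");

        // Resolve the default proxy, as above.
        Uri proxy = null;
        using (var web = new WebClient())
        {
            proxy = web.Proxy.GetProxy(requestUri);
        }

        using (var tcp = new TcpClient(proxy.DnsSafeHost, proxy.Port))
        using (var stream = tcp.GetStream())
        using (var writer = new StreamWriter(stream))
        {
            // A proxy expects the absolute URI in the request line, not just the path.
            writer.WriteLine("GET {0} HTTP/1.1", requestUri.AbsoluteUri);
            writer.WriteLine("Host: {0}", requestUri.Host);
            writer.WriteLine("Connection: close");
            writer.WriteLine(string.Empty);
            writer.Flush();

            using (var reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}
```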

Saturday, June 26, 2010

Run StyleCop on every build

I first started using StyleCop during a couple of projects with Microsoft Services, back when it was called Source Analysis, and I’m a big fan because it helps make code consistently formatted across a codebase. In order to have StyleCop run on every build, simply open the project file in the text editor of your choice (or unload the project from within the solution and then right-click to edit it within Visual Studio) and add the Microsoft.StyleCop.targets import (I normally add it immediately after the Microsoft.CSharp.targets import):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    ...
    <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
    <Import Project="$(MSBuildExtensionsPath)\Microsoft\StyleCop\v4.3\Microsoft.StyleCop.targets" />
    ...
</Project>

P.S. A plea to Microsoft: can we have the standard project and class templates pass StyleCop by default please?

Deploying to IIS7 from MSBuild

Here is an example of how to configure deployment of a web application on a development machine using MSBuild with the Extension Pack:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Run" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">

    <Import Project="$(MSBuildProjectDirectory)\lib\trove\Framework\v2.0\MSBuild.Community.Tasks.Targets" />
    <Import Project="$(MSBuildProjectDirectory)\lib\trove\Framework\v3.5\MSBuild.ExtensionPack.tasks" />

    <Target Name="Run">
        <CallTarget Targets="Clean" />
        <CallTarget Targets="Build" />
        <CallTarget Targets="Deploy" Condition="'$(registry:HKEY_LOCAL_MACHINE\Software\Microsoft\InetStp@MajorVersion)'=='7'" />
    </Target>

    <Target Name="Clean">
        <MSBuild
            Projects="$(MSBuildProjectDirectory)\src\Example.sln"
            Targets="Clean"
            Properties="Configuration=$(Configuration)"
            />
    </Target>

    <Target Name="Build">
        <MSBuild
            Projects="$(MSBuildProjectDirectory)\src\Example.sln"
            Targets="Rebuild"
            Properties="Configuration=$(Configuration)">
            <Output
                TaskParameter="TargetOutputs"
                ItemName="CodeAssemblies"
                />
        </MSBuild>
    </Target>

    <PropertyGroup>
        <WebApplicationName>www.example.net</WebApplicationName>
        <WebApplicationPath>$(MSBuildProjectDirectory)\src\Web Applications\Example</WebApplicationPath>
    </PropertyGroup>

    <Target Name="Deploy">
        <MSBuild.ExtensionPack.Web.Iis7Website
            TaskAction="CheckExists" 
            Name="$(WebApplicationName)">
            <Output
                TaskParameter="Exists"
                PropertyName="WebApplicationExists"
                />
        </MSBuild.ExtensionPack.Web.Iis7Website>
        <MSBuild.ExtensionPack.Web.Iis7Website
            TaskAction="Create"
            Name="$(WebApplicationName)"
            Path="$(WebApplicationPath)"
            Port="80"
            AppPool="ASP.NET v4.0"
            Condition="'$(WebApplicationExists)'=='False'"
            />
        <MSBuild.ExtensionPack.Web.Iis7Binding
            TaskAction="Remove"
            Name="$(WebApplicationName)"
            BindingInformation="*:80:"
            BindingProtocol="http"
            />
        <MSBuild.ExtensionPack.Web.Iis7Binding
            TaskAction="Add"
            Name="$(WebApplicationName)"
            BindingInformation="127.0.0.127:80:$(WebApplicationName)"
            BindingProtocol="http"
            Condition="'$(WebApplicationExists)'=='False'"
            />
        <MSBuild.Community.Tasks.Sleep Milliseconds="3000" />
        <MSBuild.ExtensionPack.Web.Iis7Website
            TaskAction="Stop"
            Name="$(WebApplicationName)"
            />
        <MSBuild.Community.Tasks.Sleep Milliseconds="3000" />
        <MSBuild.ExtensionPack.Web.Iis7Website
            TaskAction="Start"
            Name="$(WebApplicationName)"
            />
    </Target>

</Project>

Friday, June 25, 2010

Consistent Assembly Versioning

Personally, I like to have consistent versioning applied to all assemblies from the same build. Doing this manually is a PITA so I version from my build file and I have a pattern which I apply to achieve this. I typically use subversion for my source control and I will use the Cavity project as an example:

Subversion project structure

I build from the trunk folder:

Subversion trunk folder

I have a batch file for each ‘potted’ configuration I want and, of course, the MSBuild file. Here is the release batch file:

MSBUILD build.xml /p:Configuration=Release
PAUSE

In order to apply consistent versioning, I want to emit a Build.cs file and then link it into each project. If I want this to be a static number, I can simply configure the version directly and use the AssemblyInfo task from the MSBuild Community Tasks Project:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Run" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
    <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

    <PropertyGroup>
        <Configuration Condition="'$(Configuration)'==''">Release</Configuration>
        <Version Condition="'$(Version)'==''">1.2.3.4</Version>
    </PropertyGroup>

    <Target Name="Run">
        <CallTarget Targets="Clean" />
        <CallTarget Targets="Build" />
    </Target>

    <Target Name="Clean">
        <MSBuild
            Projects="$(MSBuildProjectDirectory)\src\Cavity.sln"
            Targets="Clean"
            Properties="Configuration=$(Configuration)"
        />
    </Target>

    <Target Name="Versioning">
        <AssemblyInfo
            CodeLanguage="CS"
            OutputFile="$(MSBuildProjectDirectory)\src\Build.cs"
            AssemblyVersion="$(Version)"
            AssemblyFileVersion="$(Version)"
            AssemblyInformationalVersion="$(Version)"
            />
    </Target>

    <Target Name="Build" DependsOnTargets="Versioning">
        <MSBuild
            Projects="$(MSBuildProjectDirectory)\src\Cavity.sln"
            Targets="Rebuild"
            Properties="Configuration=$(Configuration)">
            <Output
                TaskParameter="TargetOutputs"
                ItemName="CodeAssemblies"
                />
        </MSBuild>
    </Target>

</Project>

This will emit the following Build.cs:

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.1
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

using System;
using System.Reflection;
using System.Resources;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[assembly: AssemblyVersion("1.2.3.4")]
[assembly: AssemblyFileVersion("1.2.3.4")]
[assembly: AssemblyInformationalVersion("1.2.3.4")]

However, I normally use the subversion revision number as the build number so that I can identify what was built. To do this you will need to install the CollabNet Subversion client in order to be able to query the subversion repository. Once installed, you can use the following build file to emit a dynamic version number by utilising the SvnVersion task:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Run" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
    <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

    <PropertyGroup>
        <Configuration Condition="'$(Configuration)'==''">Release</Configuration>
        <Version Condition="'$(Version)'==''">1.2.3</Version>
        <Revision>0</Revision>
    </PropertyGroup>

    <Target Name="Run">
        <CallTarget Targets="Clean" />
        <CallTarget Targets="Build" />
    </Target>

    <Target Name="Clean">
        <MSBuild
            Projects="$(MSBuildProjectDirectory)\src\Cavity.sln"
            Targets="Clean"
            Properties="Configuration=$(Configuration)"
            />
    </Target>

    <Target Name="Versioning">
        <SvnVersion LocalPath=".">
            <Output TaskParameter="Revision" PropertyName="Revision" />
        </SvnVersion>
        <AssemblyInfo
            CodeLanguage="CS"
            OutputFile="$(MSBuildProjectDirectory)\src\Build.cs"
            AssemblyVersion="$(Version).$(Revision)"
            AssemblyFileVersion="$(Version).$(Revision)"
            AssemblyInformationalVersion="$(Version).$(Revision)"
            />
    </Target>

    <Target Name="Build" DependsOnTargets="Versioning">
        <MSBuild
            Projects="$(MSBuildProjectDirectory)\src\Cavity.sln"
            Targets="Rebuild"
            Properties="Configuration=$(Configuration)">
            <Output
                TaskParameter="TargetOutputs"
                ItemName="CodeAssemblies"
                />
        </MSBuild>
    </Target>

</Project>

Saturday, June 19, 2010

Abstracting Service Location

A couple of days ago I blogged about the Common Service Locator: how to use the ServiceLocator and how to mock the IServiceLocator interface for unit testing purposes. However, I personally feel that the Common Service Locator library didn’t complete the job. Ideally, I should not need to choose a specific IoC provider whilst writing my code (or, at least, I should not be forced to recompile my application in order to change provider), and the library doesn’t enable this.

In order to enable this use case, I’ve published a set of lightweight libraries in the Cavity project. The key library is Cavity.ServiceLocation.dll which contains one interface and one concrete class. The interface is, frankly, trivial:

namespace Cavity.Configuration
{
    public interface ISetLocatorProvider
    {
        void Configure();
    }
}

The interface is trivial simply because it’s a hook to load the provider-specific configuration data. I have provided a plain vanilla implementation for Autofac, Castle Windsor, StructureMap and Unity because each of these supports XML configuration. Here is the implementation of the Castle Windsor ISetLocatorProvider:

namespace Cavity.Configuration
{
    using Castle.Windsor;
    using Castle.Windsor.Configuration.Interpreters;
    using CommonServiceLocator.WindsorAdapter;
    using Microsoft.Practices.ServiceLocation;

    public sealed class XmlServiceLocatorProvider : ISetLocatorProvider
    {
        public void Configure()
        {
            var container = new WindsorContainer(new XmlInterpreter());
            ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
        }
    }
}

As you can see, the code is lightweight: create a container, load it with configuration data and apply the configured container to the generic ServiceLocator and you’re done. To set up Castle Windsor as your provider, you must edit your app.config or web.config, as appropriate, as follows (having a separate castle.config is optional but generally considered preferable):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <section
            name="castle"
            type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor"/>
        <section
            name="serviceLocation"
            type="Cavity.Configuration.ServiceLocation, Cavity.ServiceLocation"/>
    </configSections>
    <castle configSource="castle.config" />
    <serviceLocation type="Cavity.Configuration.XmlServiceLocatorProvider, Cavity.ServiceLocation.CastleWindsor" />
</configuration>

The <serviceLocation> element type attribute points to the ISetLocatorProvider implementation you wish to use. To learn more about castle.config, see Initializing with an external configuration. To see examples of each provider, you can browse the Cavity source code.

If you prefer to configure your container via a fluent interface, you can still employ the ISetLocatorProvider abstraction by writing a custom implementation.
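As a sketch of what such a custom implementation might look like for Castle Windsor (the IFoo registration shown is purely illustrative and not part of the library):

```csharp
namespace Cavity.Configuration
{
    using Castle.MicroKernel.Registration;
    using Castle.Windsor;
    using CommonServiceLocator.WindsorAdapter;
    using Microsoft.Practices.ServiceLocation;

    public sealed class FluentServiceLocatorProvider : ISetLocatorProvider
    {
        public void Configure()
        {
            // Register components in code rather than loading XML configuration.
            var container = new WindsorContainer();
            container.Register(Component.For<IFoo>().ImplementedBy<FooImplementation>());

            ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
        }
    }
}
```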

More detail about using the Cavity.ServiceLocation.dll library can be seen on the Cavity Wiki. Packages for each of the four plain vanilla implementations can be downloaded as zips and the binaries are also available via trove.

Thursday, June 17, 2010

Mocking IServiceLocator

Using IoC has become far more popular in recent years, but it is all too easy to decouple your components only to find yourself tightly coupled to a specific provider, such as Castle Windsor. The first step in decoupling from the IoC provider is to utilize the Common Service Locator published by the Microsoft patterns & practices team. Here is a trivial example of decoupling using the ServiceLocator:

namespace Example
{
    using Microsoft.Practices.ServiceLocation;

    public interface IFoo
    {
        void Foo();
    }

    public sealed class FooImplementation : IFoo
    {
        public void Foo()
        {
        }
    }

    public sealed class Class1
    {
        public Class1()
        {
            this.Foo = ServiceLocator.Current.GetInstance<IFoo>();
        }

        public IFoo Foo
        {
            get;
            private set;
        }
    }
}

However, if you run the following test, a System.ArgumentNullException will be thrown as the ServiceLocator does not have a provider set:


namespace Example
{
    using Xunit;

    public sealed class Class1Facts
    {
        [Fact]
        public void ctor()
        {
            Assert.NotNull(new Class1());
        }
    }
}

Rather than configure a specific provider, it’s much cleaner to mock out the ServiceLocator. The following example uses Moq but the principle applies regardless of your preferred mocking framework:


namespace Example
{
    using Microsoft.Practices.ServiceLocation;
    using Moq;
    using Xunit;

    public sealed class Class1Facts
    {
        [Fact]
        public void ctor()
        {
            try
            {
                var mock = new Mock<IServiceLocator>();
                mock.Setup(x => x.GetInstance<IFoo>()).Returns(new FooImplementation()).Verifiable();
                ServiceLocator.SetLocatorProvider(new ServiceLocatorProvider(() => mock.Object));

                Assert.NotNull(new Class1());

                mock.VerifyAll();
            }
            finally
            {
                ServiceLocator.SetLocatorProvider(null);
            }
        }
    }
}

Trove

Over the last couple of months I have been thinking about package management in the .net ecosystem. If you only develop using assemblies from Microsoft then you will not have had to think much about this, but if you utilize open source assemblies then it can rapidly become a real bugbear. Some open source packages are standalone and thus don’t suffer reference problems; the HtmlAgilityPack is a good example of this, as it only references the System assemblies. If you start utilizing the Castle Project (the IoC functionality in Windsor or the NHibernate functionality in ActiveRecord are commonly employed) then you will rapidly encounter reference problems, especially if you also use other libraries that depend on Castle.

Now I’m not the first to consider this problem. HornGet takes the direct approach of building the source and publishing the binaries. I don’t know how much, if any, verification work is done, but I have noticed that the builds seem to break rather frequently, which doesn’t inspire confidence. As an experiment, I decided to carry out a poor man’s replication of that approach by simply writing a batch build file for a sequence of projects. One of the first issues I discovered was that the trunk often doesn’t build successfully on open source projects. I suppose I should not have been surprised, of course. I then carried out a second experiment, building from branches where available. These typically built successfully, but the dependencies were incompatible, leaving you to choose this or that rather than this and that. So my conclusion was that a direct build doesn’t meet the need.

After direct build, there is Ruby Gems envy. I’m convinced that there was an NGem project at one time, but I can’t find it and it certainly didn’t get any adoption by the community. At present I’m aware of at least three projects that want to create a solution to the problem: OpenWrap, Bricks, and CoApp. The problem is that all of these are either vapourware or alpha code, and I want something useful right now.

After musing for a little while, it was clear to me that simply starting yet-another-package-management-project would be wasteful duplication. Maybe one of the currently active projects will come to something; who knows. I then got to thinking about what would be the simplest thing that could possibly work (Ward Cunningham’s question). Well, maybe the simplest thing would be just to work on a single package of mutually consistent assemblies. Not so hard to do either, so long as you put together some verification tests to check that the package as a whole is stable and mutually consistent. How to distribute? An easy way would be just to zip up the assemblies, but then it occurred to me that I could simply create a Subversion repo and consumers could link to it using svn:externals, which is really simple to configure with TortoiseSVN.

Thus was born http://code.google.com/p/trove/

There is a manifest of the contained assemblies. All you need to do is add an svn:externals property on your lib folder of “trove http://trove.googlecode.com/svn/trunk/lib” and update :-)
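If you prefer the command line to TortoiseSVN, the equivalent (with an illustrative lib path, run from the root of your working copy) is along these lines:

svn propset svn:externals "trove http://trove.googlecode.com/svn/trunk/lib" lib
svn update lib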

If you pull down the whole project, you will also get the verification tests.

At the moment, the trove contains:

If there are other libraries you think I should have in the trove, please let me know.

Wednesday, June 16, 2010

NDepend Review

[Disclaimer: I have been given a free copy of NDepend in order to be able to write this review]

As I have released my unit testing DSL on Cavity, I thought that this would be a good opportunity to take a look at NDepend (a tool which I already knew about but hadn’t taken out for a ride, in no small part because I was not sure that I would get enough benefit to justify the license price which starts from €299). It is worth pointing out that before I ran NDepend, the assembly passed both Code Analysis and Source Analysis.

Installation is simple: unzip the download to your preferred location and run the application. Also included is support for MSBuild, CruiseControl and NAnt. Although installation is simple, I do think that also providing an MSI installer would be useful but that’s not a deal-breaker for me.

The application provides a start screen:

NDepend Start Screen

I didn’t bother with creating a project; I just went straight ahead and selected Cavity.Testing.Unit.dll to analyze. When this is done, an html report is emitted and the application shifts into Quick Project mode so that you can browse the assembly. The html report provides a great deal of information; here are some sections:

Application Metrics

Application Metrics Section

This section provides an overview of the gross topology of the assembly, the contained types (classes, interfaces and so on) and the maximum statistical result such as the highest cyclomatic complexity.

Assemblies Metrics

Assemblies Metrics Section

This section provides headline metrics which indicate how maintainable the assembly is. At first glance, this is all rather obtuse so the report next provides an easy-to-understand graph.

Assemblies Abstractness vs. Instability

Abstractness vs. Instability Graph

My assembly is sitting inside the green zone, so I’m going to infer that it is broadly maintainable.

Assemblies Dependencies Diagram

Assemblies Dependencies Diagram

I’m only looking at one assembly (and a rather trivial one at that), so this diagram isn’t very informative, but I imagine that it would be more useful for more complex, multi-assembly solutions.

CQL Queries and Constraints

CQL Queries and Constraints Section

At the top of this section, a colour-coded list is provided. For my assembly, eight constraints are green (pass) and nine are yellow (warning). I assume that items also can be coloured red (fail) but that my assembly doesn’t warrant it.

{Code Quality \ Type Metrics}

CQL Constraint {Code Quality - Type Metrics}

Three classes have warnings:

  • Resources: this warning is due to the tool not ignoring classes marked as [GeneratedCode].
  • PropertyExpectations<T> and TypeExpectations<T>: these are my two internal DSLs, so it's not a surprise that they have a large number of methods as I have employed method chaining.
{Design}

CQL Constraint {Design}

This isn’t especially informative so I swapped over to the application and looked at the Dependency Matrix:

Dependency Matrix - # namespaces

Browsing through this matrix indicates that the constraint failed due to the method chaining implementation of the internal DSL. As an observation, method chaining does indeed lead to less maintainable code, as changing the decision flow is non-trivial, but this is an example of deliberately accepting a burden in order to achieve an objective.

There are a bunch of other constraints reported on but I think that you get the picture. Time to have a look at the application in a little more detail.

CQL Query Explorer

I’m going to take a trivial example to illustrate usage. At the bottom of the application screen there is an explorer-style listing of the CQL report information, colour-coded as in the report. I have selected Abstract base classes should be suffixed with ‘Base’ from the Naming Conventions node. When I do so, the relevant classes are highlighted in blue in the map above. When I hover my mouse over a highlighted class, it changes to a pink highlight and the tooltip window on the right displays CQL information about that class. It all feels very slick and responsive. In this particular case, I have no objection to renaming the two classes with a suffix of ‘Base’, so I have gone ahead and made that change.

CQL Query Explorer

Conclusion

NDepend is clearly a powerful tool and I can see myself finding it useful in future projects, but it is equally clearly an expert tool, best employed by those already comfortable with static analysis or as a means of self-education. It isn’t a ‘must-have’ tool. What would make it so, for me, would be the same type of Visual Studio integration that Code Analysis and Source Analysis have. I like being able to configure static analysis at project inception, with warnings and errors emitted during the build. This feels very natural to me, and having to leave the IDE simply means that I’m far less likely to utilise the tool.

[Update] I should make clear that NDepend does integrate with Visual Studio. In the text above I am specifically referring to the build warning / error integration that both Code Analysis and Source Analysis provide. This means that as soon as you write the code, you will discover if you have caused an analysis issue. I am a firm believer that raising issues as soon as you write the code makes writing clean code a cheaper proposition.

Tuesday, June 15, 2010

Cavity Unit Testing

I have started a new open source project on Google Code which I’ve named Cavity, for no particular reason other than that it’s somewhat memorable: http://code.google.com/p/cavity/. I plan on pushing up some of the code I use to accelerate and assist development, starting with an internal DSL for unit testing type and property definitions.

I’ve been doing TDD for about 9 years now and have, I admit, derived a somewhat idiosyncratic style which involves asserting all of the external characteristics of a type. I don’t agree with those who argue in favour of not testing properties (I disagree because properties have behaviour and thus ought to be verifiable). I also believe that interface implementations and attribute decorations should be verifiable (which also speaks to my belief in what I call intentional development). However, there is a downside: asserting type and property definitions is slower than not doing so (obviously), so I use my unit test DSL to accelerate the process and thus mitigate the overhead. This has been living inside the SimpleWebServices project for a while now, but I thought it was time to promote it to a more formal offering, and so it is the first library available from Cavity.

To give you a flavour, here are a couple of examples of testing a class and a property:

[Fact]
public void type_definition()
{
    Assert.True(new TypeExpectations<Class1>()
        .DerivesFrom<object>()
        .IsConcreteClass()
        .IsUnsealed()
        .HasDefaultConstructor()
        .Implements<IFoo>()
        .IsDecoratedWith<CustomAttribute>()
        .Result);
}

[Fact]
public void value_definition()
{
    Assert.True(new PropertyExpectations<Class1>("Value")
        .TypeIs<string>()
        .DefaultValueIs("default")
        .Set("example")
        .ArgumentNullException()
        .ArgumentOutOfRangeException(string.Empty)
        .FormatException("invalid")
        .IsNotDecorated()
        .Result);
}

The binaries and source are zipped for download; to see more examples, please visit the wiki: