[Disclaimer: I have been given a free copy of NDepend in order to be able to write this review]
As I have released my unit testing DSL on Cavity, I thought this would be a good opportunity to take a look at NDepend (a tool I already knew about but hadn’t taken for a spin, in no small part because I wasn’t sure I would get enough benefit to justify the license price, which starts at €299). It is worth pointing out that before I ran NDepend, the assembly passed both Code Analysis and Source Analysis.
Installation is simple: unzip the download to your preferred location and run the application. Support for MSBuild, CruiseControl and NAnt is also included. Although installation is simple, I do think an MSI installer would also be useful, but that’s not a deal-breaker for me.
The application provides a start screen:
I didn’t bother creating a project; I just went straight ahead and selected Cavity.Testing.Unit.dll for analysis. When this is done, an HTML report is emitted and the application shifts into Quick Project mode so that you can browse the assembly. The HTML report provides a great deal of information; here are some of its sections:
Application Metrics
This section provides an overview of the gross topology of the assembly, the contained types (classes, interfaces and so on) and the statistical maxima, such as the highest cyclomatic complexity.
Assemblies Metrics
This section provides headline metrics which indicate how maintainable the assembly is. At first glance, this is all rather opaque, so the report next provides an easy-to-understand graph.
Assemblies Abstractness vs. Instability
My assembly is sitting inside the green zone, so I’m going to infer that it is broadly maintainable.
Assemblies Dependencies Diagram
I’m only looking at one assembly (and a rather trivial one at that), so this diagram isn’t very informative, but I imagine that in more complex, multi-assembly situations it would be far more useful.
CQL Queries and Constraints
At the top of this section, a colour-coded list is provided. For my assembly, eight constraints are green (pass) and nine are yellow (warning). I assume that items can also be coloured red (fail) but that my assembly doesn’t warrant it.
{Code Quality \ Type Metrics}
Three classes have warnings:
- Resources: this warning is due to the tool not ignoring classes marked with [GeneratedCode].
- PropertyExpectations<T> and TypeExpectations<T>: these are my two internal DSLs, so it’s no surprise that they have a large number of methods, as I have employed method chaining.
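As I understand it, each constraint is simply a CQL query over the code model. Here is a sketch of the kind of rule that produces these Type Metrics warnings; the threshold is illustrative rather than NDepend’s shipped default, so the actual query may differ:

```
// Warn about types carrying an unusually large number of methods.
// Illustrative threshold only; the shipped rule may differ.
WARN IF Count > 0 IN SELECT TYPES WHERE NbMethods > 20
```

A [GeneratedCode]-aware version of such a rule would presumably exclude generated types before counting, which is what the Resources warning above comes down to.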
{Design}
This isn’t especially informative so I swapped over to the application and looked at the Dependency Matrix:
Browsing through this matrix indicates that the constraint failed because of the method chaining implementation of the internal DSL. As an observation, method chaining does indeed lead to less maintainable code, as changing the decision flow is non-trivial, but this is an example of deliberately accepting a burden in order to achieve an objective.
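To make the trade-off concrete, here is a minimal, hypothetical fluent interface — not Cavity’s actual API — showing the mechanics of method chaining: each method returns `this`, so calls string together into a fixed pipeline, which is exactly what makes later changes to the decision flow awkward.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a fluent interface (NOT Cavity's actual API).
// Each method records a check and returns 'this' to enable chaining.
public sealed class Expectation
{
    private readonly List<string> _checks = new List<string>();

    // Number of checks recorded so far.
    public int Count
    {
        get { return _checks.Count; }
    }

    public Expectation IsNotNull()
    {
        _checks.Add("not null");
        return this; // returning 'this' is what enables chaining
    }

    public Expectation TypeIs<T>()
    {
        _checks.Add("type is " + typeof(T).Name);
        return this;
    }
}
```

A caller can then write `new Expectation().IsNotNull().TypeIs<string>()`, which reads fluently — but inserting a branch in the middle of that chain means restructuring the types involved, not just editing one line.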
There are a bunch of other constraints reported on but I think that you get the picture. Time to have a look at the application in a little more detail.
CQL Query Explorer
I’m going to take a trivial example to illustrate usage. At the bottom of the application screen there is an explorer-style listing of the CQL report information, colour-coded as in the report. I have selected Abstract base classes should be suffixed with ‘Base’ from the Naming Conventions node. When I do so, the relevant classes are highlighted in blue in the map above. When I hover my mouse over a highlighted class it changes to a pink highlight and the tooltip window on the right displays CQL information about that class. It all feels very slick and responsive. In this particular case, I have no objection to renaming the two classes with the suffix ‘Base’, so I have gone ahead and made that change.
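For the curious, that naming rule is itself just a CQL query. Written from memory, so the shipped query may differ in detail, it looks something like this:

```
// Flag abstract classes whose names do not end in 'Base'.
WARN IF Count > 0 IN SELECT TYPES WHERE IsAbstract AND IsClass AND !NameLike "Base$"
```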
Conclusion
NDepend is clearly a powerful tool and I can see myself finding it useful in future projects, but it is equally clearly an expert tool, best employed by those already comfortable with static analysis or as a means of self-education. It isn’t a ‘must-have’ tool. What would make it so, for me, would be the same kind of Visual Studio integration that Code Analysis and Source Analysis offer. I like being able to configure static analysis at project inception and have warnings and errors emitted during the build. This feels very natural to me, and having to leave the IDE simply means that I’m far less likely to use the tool.
[Update] I should make clear that NDepend does integrate with Visual Studio. In the text above I am specifically referring to the build warning/error integration that both Code Analysis and Source Analysis provide: as soon as you write the code, you discover whether you have caused an analysis issue. I am a firm believer that raising issues as soon as the code is written makes writing clean code a cheaper proposition.