
ClickOnce seems to be quite maligned on the internet, but I think it's a nice, simple publishing technology, as long as your application is small and doesn't need to do anything fancy on installation. It offers automatic updating as well, which is very nice.

Anyway, I was getting annoyed that the only way I could deploy a new version of this desktop WPF app was by going into Visual Studio and right-clicking the Project –> Properties –> Publish. Then I had to go in and fiddle with the version and make sure the other settings were correct.

It was all just too many clicks and too much to remember.

Complicating matters is that the app has 3 different build configurations: development, staging and production. Switching between those build configurations changed both the ClickOnce publish settings and the API endpoint used by the application. However, sometimes Visual Studio would just forget to update some of the ClickOnce publish settings, which caused me to publish a development application to the staging URL a couple of times (and vice versa). You had to actually reload the project or restart Visual Studio to guarantee that it would deploy to the correct location with the correct configuration. Frustrating (and dangerous!).

Here's a little more information about the changes that occur as a result of selecting a different build configuration.

The API endpoint is just an app setting, so it uses Slow Cheetah and config transforms.

The publish URL (and supporting ClickOnce publish information) is stored in the csproj file though, so it uses a customised targets file, like this:

<Project ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup Condition="'$(Configuration)' == 'development-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\development\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/development/</InstallUrl>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'staging-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\staging\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/staging/</InstallUrl>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'production-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\production\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/production/</InstallUrl>
    </PropertyGroup>
</Project>

This customised targets file was included in the csproj file like this:

<Import Project="$(ProjectDir)\Customized.targets" />

So, build script time! Mostly to make doing a deployment easier, but hopefully also to deal with that issue of selecting a build configuration and not having the proper settings applied.

Like everything involving software, many Yaks were shaved as part of automating this deployment.

Automation

My plan was to create 3 scripts, one to deploy to each environment. Those 3 scripts should use a single script as a base, so I don’t create a maintenance nightmare. This script would have to be parameterised around the target configuration. Sounds simple enough.

Let's look at the finished product first, then we'll go into each section in detail.

@ECHO OFF

SET publish_type=%1
SET install_url=[PUBLICALLY ACCESSIBLE INSTALL URL]/%publish_type%/
SET configuration=%publish_type%-release
SET remote_publish_destination=[MAPPED CLOUD DRIVE]\%publish_type%\

SET timestamp_file=publishDate.tmp
tools\date.exe +%%Y%%m%%d%%H%%M%%S > %timestamp_file%
SET /p timestamp_directory= < %timestamp_file%
DEL %timestamp_file%

REM %~dp0 is the directory containing the batch file, which is the Solution Directory.
SET publish_output=%~dp0publish\%publish_type%\%timestamp_directory%\
SET msbuild_output=%publish_output%msbuild\

tools\NuGet.exe restore [SOLUTION FILE]

"C:\Program Files (x86)\MSBuild\12.0\bin\msbuild.exe" [SOLUTION FILE] /t:clean,rebuild,publish /p:Configuration=%configuration%;PublishDir=%msbuild_output%;InstallUrl=%install_url%;IsWebBootstrapper=true;InstallFrom=Web

IF ERRORLEVEL 1 (
    ECHO Build Failure. Publish terminated.
    EXIT /B 1
)

REM Add a small (1 second) delay here because sometimes the robocopy fails to delete the files it's moving because they are in use (probably by MSBUILD).
TIMEOUT /T 1

robocopy %msbuild_output% %remote_publish_destination% /is /E

Not too bad. It fits on one screen, which is nice. The script itself doesn't actually do all that much; it just leverages MSBUILD and the build configurations that were already present in the solution.

The script lives in the root directory of my solution, and it does everything relative to the directory that it's in (so you can freely move it around). Scripts with hardcoded directories are such a pain to use, and there's not even a good reason to write them that way. It's just as easy to make the script relative.

Alas, it's not perfectly self-contained. It relies on Visual Studio 2013 being installed, and a couple of other things (which I will mention later). Ah well.

Variables

First up, the script sets some variables that it needs to work with. The only parameter supplied to the script is the publish or deployment type (for me that's development, staging or production). It then uses this value to select the appropriate build configuration (because they are named similarly) and the final publish location (which is a publicly accessible URL).

Working Directory

Secondly, the script creates a working directory using the type of publish being performed and the current date and time. I've used the "date" command-line tool for this, which I extracted (stole) from a set of Unix tools that were ported to Windows. It's completely self-contained (no dependencies, yeah!), so I've just included it in the tools directory of my repository. If you're wondering why it writes the timestamp to a file, it's because I had some issues with the timestamp not evaluating correctly when I put it directly into a variable. The SET /P line allows you to set a variable using input from the user, and here the input it is supplied with is the contents of the file (via the < operator). More tricksy than I would like, but it gets the job done.

My other option was to write a simple C# command-line application to get the formatted timestamp for the directory name myself, but this was a good exercise in exploring batch files. I suppose I could have also used ScriptCS to do it inline (or PowerShell), but that would have taken even more time to learn (I don't have a lot of PowerShell experience). This was the simplest and quickest solution in the end.
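
For reference, the C# tool would only have been a few lines anyway. Here's a minimal sketch (hypothetical, not something that actually lives in the repository) of a console application that prints a sortable timestamp suitable for a directory name:

using System;

// Hypothetical stand-in for tools\date.exe: prints the current date and time
// in a sortable yyyyMMddHHmmss format, e.g. 20141022143059.
public static class Program
{
    public static void Main()
    {
        Console.Write(DateTime.Now.ToString("yyyyMMddHHmmss"));
    }
}

The batch file could then capture its output with a FOR /F loop instead of the temporary file dance.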

NuGet Package Restore

Third, the script restores the packages that the solution uses via NuGet. There are a couple of ways to do this inside the csproj files themselves (using the MSBUILD targets and tasks), but I find it's just easier to call NuGet.exe directly from the command line, especially if you're already in build-script land. It's much more obvious what's going on.

Build and Publish

Fourth, we finally get to the meat of the build script, where it farms out the rebuild and publish to MSBUILD. You can set basically any property from the command line when doing a build through MSBUILD, and in this case it sets the selected build configuration, a local directory to dump the output to and some properties related to the eventual location of the deployed application.

The reason it sets the IsWebBootstrapper and InstallFrom properties on the command line is because I’ve specifically set the ClickOnce deployment property values in the project file to be non-functional. This is to prevent people from publishing without using the script, which as mentioned previously, can actually be a risky proposition due to the build configurations.

The build and publish is more complicated than it appears though, and the reason for that is versioning.

Versioning

Applications deployed through ClickOnce have two version-related attributes.

The first is the ApplicationVersion, and the second is MinimumRequiredVersion.

ApplicationVersion is the actual version of the application that you are deploying. Strangely enough, this is NOT the same as the version defined in the AssemblyInfo file of the project. This means that you can publish Version 1.0.0.0 of a ClickOnce application and have the actual deployed exe not be that version. In fact, that’s the easiest path to take. It takes significantly more effort to synchronize the two.

I don’t like that.

Having multiple identifiers for the same piece of software is a nightmare waiting to happen, especially considering that when you actually try to reference the version from inside some piece of C#, you can either use the normal way (checking the version of the executing assembly) or you can check the ClickOnce deployment version.
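
To make the difference concrete, here are the two ways you can read a version from inside the application (a small sketch; the first comes from reflection, the second from the ClickOnce deployment API, and they can happily disagree):

using System.Deployment.Application;
using System.Reflection;

public static class VersionInfo
{
    // The "normal" version: whatever AssemblyVersion was compiled into the executing assembly.
    public static string AssemblyVersion()
    {
        return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }

    // The ClickOnce version: the ApplicationVersion the application was published with.
    // Only available when the application was actually launched via ClickOnce.
    public static string DeploymentVersion()
    {
        return ApplicationDeployment.IsNetworkDeployed
            ? ApplicationDeployment.CurrentDeployment.CurrentVersion.ToString()
            : "not deployed via ClickOnce";
    }
}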

Anyway, MinimumRequiredVersion is for forcing users of the ClickOnce application to update to a specific version. In this case, the product owner required that the user always be using the latest version (which I agree with), so MinimumRequiredVersion needed to be synchronized with ApplicationVersion.

ClickOnce seems to assume that someone will be manually setting the ApplicationVersion (and maybe also the MinimumRequiredVersion) before a deployment occurs, and isn’t very friendly to automation.

I ended up having to write a customised MSBUILD task. It's nothing fancy (and looking back at it, I'm pretty sure there are many better ways to do it, maybe even using the community build tasks) but it gets the job done. You can see the source code of the build task here.

It takes a path to an AssemblyInfo file, reads the AssemblyVersion attribute from it, sets the build and revision versions to appropriate values (build is set to YYDDD i.e. 14295, revision is set to a monotonically increasing number, which is reset to 0 on the first build of each day), writes the version back to the AssemblyInfo file and then outputs the generated version, so that it can be used in future build steps.
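
For illustration, the core of a task like that might look something like the following. This is a simplified sketch (not the actual source linked above), and it assumes the AssemblyInfo file contains a standard [assembly: AssemblyVersion("...")] attribute:

using System;
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// Simplified sketch of a version-generating MSBUILD task.
public class ReadUpdateSaveAssemblyInfoVersionTask : Task
{
    [Required]
    public string AssemblyInfoSourcePath { get; set; }

    [Output]
    public string GeneratedVersion { get; set; }

    public override bool Execute()
    {
        var content = File.ReadAllText(AssemblyInfoSourcePath);

        // Find the existing attribute, e.g. [assembly: AssemblyVersion("1.2.14295.3")].
        var pattern = @"(?m)^\[assembly: AssemblyVersion\(""(?<version>[\d\.]+)""\)\]";
        var match = Regex.Match(content, pattern);
        if (!match.Success)
        {
            Log.LogError("No AssemblyVersion attribute found in {0}.", AssemblyInfoSourcePath);
            return false;
        }

        var current = new Version(match.Groups["version"].Value);

        // Build number is YYDDD (two digit year + day of year), e.g. 14295.
        var now = DateTime.Now;
        var build = (now.Year % 100) * 1000 + now.DayOfYear;

        // Revision increments within a day and resets to 0 on the first build of a new day.
        var revision = current.Build == build ? current.Revision + 1 : 0;

        GeneratedVersion = new Version(current.Major, current.Minor, build, revision).ToString();

        // Write the new version back into the AssemblyInfo file.
        content = Regex.Replace(content, pattern,
            string.Format("[assembly: AssemblyVersion(\"{0}\")]", GeneratedVersion));
        File.WriteAllText(AssemblyInfoSourcePath, content);

        return true;
    }
}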

I use this custom task in a customised targets file, which is included in the project file for the application (in the same way as the old project customisations were included above).

This is what the targets file looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <GeneratedAssemblyVersion></GeneratedAssemblyVersion>
    </PropertyGroup>
    <UsingTask TaskName="Solavirum.Build.MSBuild.Tasks.ReadUpdateSaveAssemblyInfoVersionTask" AssemblyFile="$(SolutionDir)\tools\Solavirum.Build.MSBuild.Tasks.dll" />
    <Target Name="UpdateVersion">
        <Message Text="Updating AssemblyVersion in AssemblyInfo." Importance="high" />
        <ReadUpdateSaveAssemblyInfoVersionTask AssemblyInfoSourcePath="$(ProjectDir)\Properties\AssemblyInfo.cs">
            <Output TaskParameter="GeneratedVersion" PropertyName="GeneratedAssemblyVersion" />
        </ReadUpdateSaveAssemblyInfoVersionTask>
        <Message Text="New AssemblyVersion is $(GeneratedAssemblyVersion)" Importance="high" />
        <Message Text="Updating ClickOnce ApplicationVersion and MinimumRequiredVersion using AssemblyVersion" Importance="high" />
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="ApplicationVersion" />
        </CreateProperty>
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="MinimumRequiredVersion" />
        </CreateProperty>
        <!-- 
        This particular property needs to be set because of reasons. Honestly I'm not sure why, but if you don't set it
        the MinimumRequiredVersion attribute does not appear correctly inside the deployment manifest, even after setting
        the apparently correct property above.
        -->
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="_DeploymentBuiltMinimumRequiredVersion" />
        </CreateProperty>
    </Target>
    <Target Name="BeforeBuild">
        <CallTarget Targets="UpdateVersion" />
    </Target>
</Project>

It's a little hard to read (it's XML) and it can be hard to understand if you're unfamiliar with the way that MSBUILD deals with…things. It has a strange way of taking output from a task and doing something with it, and it took me a long time to wrap my head around it.

From top to bottom, you can see that it creates a new property called GeneratedAssemblyVersion and then calls into the custom task (which is available in a DLL in the tools directory of the repository). The version returned from the custom task is then used to set the ApplicationVersion and MinimumRequiredVersion properties (and to log some statements about what it's doing in the build output). Finally, it configures the custom UpdateVersion target to be executed before a build.

Note that it was insufficient to just set the MinimumRequiredVersion, I also had to set the _DeploymentBuiltMinimumRequiredVersion. That took a while to figure out, and I still have no idea exactly why this step is necessary. If you don’t do it though, your MinimumRequiredVersion won’t work the way you expect it to.

Now that we’ve gone on a massive detour around versioning, its time to go back to the build script itself.

Deployment

The last step in the build script is to actually deploy the contents of the publish directory filled by MSBUILD to the remote URL.

This publish directory typically contains a .application file (which is the deployment manifest), a setup.exe and an Application Files directory containing another versioned directory with the actual EXE and supporting files inside. It's a nice, clean structure, because you can deploy over the top of a previous version and it will keep the old version's files around and only update the .application file (which defines the latest version and some other stuff).

For this application, the deployment URL is an Amazon S3 account, parts of which are publicly accessible. I use TNTDrive to map a folder in the S3 account to a local drive, which in my case is Y.

With the deployment location available just like a local drive, all I need to do is use robocopy to move the files over.

I’ve got some basic error checking before the copy (just to ensure that it doesn’t try to publish if the build fails) and I included a short delay before the copy because of some issues I was having with the files being locked even though MSBuild had returned.

Future Points of Improvement

I don’t like the fact that someone couldn’t just clone the repository that the build script is inside and run it. That makes me sad. You have to install the TNTDrive client and then map it to the same drive letter as the script expects. A point of improvement for me would be to use some sort of portable tool to copy into an Amazon S3 account. I’m sure the tool exists, I just haven’t had a chance to look for it yet. Ahh pragmatism, sometimes I hate you.

A second point of improvement would be to make the script run the unit and integration tests and not publish if they fail. I actually had a prototype of this working at one point, but it needed more work and I had to pull back from it in order to get the script completed in time.

Lastly, it would be nice if I had a build server for this particular application. Something like TeamCity or AppVeyor. I think this is a great idea, but for this particular application (which I only work on contractually, for a few hours each week) it's not really something I have time to invest in. Yet.

Conclusion

ClickOnce is a great technology, but it was quite an adventure getting a script based deployment to work. Documentation in particular is pretty lacklustre, especially around what exactly some of the deployment properties mean (and their behaviour).

Still, the end result solved my initial problems. I no longer have to worry about accidentally publishing an application to the wrong place, and I can just enter one simple command from the command line to publish. I even managed to fix up some versioning issues I had with the whole thing along the way.

Victory!


Yeah, sorry about the title. They can’t all be winners.

Anyway, I tracked down an interesting bug a few weeks ago, and I thought that it might be worthwhile to discuss it here, so that I can see it later when the memory fades. Also it might help someone else, which is nice, I guess.

The Problem

There was a web application. Some performance/load testing was being done (for the first time, after development had "completed", because who needs an Agile approach, right?) and the results showed an unreasonable number of failures during login. Something in the realm of 10% of all login attempts under a medium load would fail.

The Cause

At the root, the bug involved this class:

public class ExecutionEngine
{
    public string ConnectionString { get; set; }
    public SqlCommand Command { get; set; }
    public DataSet Result { get; set; }

    public void Execute()
    {
        var conn = new SqlConnection(ConnectionString);
        Command.Connection = conn;
        var adapter = new SqlDataAdapter(Command);

        Result = new DataSet();
        adapter.Fill(Result);
    }
}

Pretty terrible, all things considered. A weird way to use SqlCommands and DataSets, but okay.

A typical usage of the ExecutionEngine class was as follows:

public class Demonstration
{
    public Demonstration(ExecutionEngine engine)
    {
        _Engine = engine;
    }

    private readonly ExecutionEngine _Engine;

    public IEnumerable<string> GetTheThingsForAUser(string userId)
    {
        _Engine.Command.CommandText = "GetAListOfAllTheThings";
        _Engine.Command.CommandType = CommandType.StoredProcedure;

        _Engine.Command.Parameters.Clear();
        _Engine.Command.Parameters.AddWithValue("UserId", userId);

        _Engine.Execute();

        var allTheThings = new List<string>();
        foreach (DataRow thing in _Engine.Result.Tables[0].Rows)
        {
            allTheThings.Add((string)thing["Name"]);
        }

        return allTheThings;
    }
}

There were a LOT of usages like the demonstration class above (100+), jammed into one SUPER-class called “DataAccessLayer”. This “DAL” was a dependency of the business classes, which were used by the rest of the system. An instance of a business class would be instantiated as needed, which in turn would resolve its dependencies (using Ninject) and then be used to service the incoming request.

Given that I’ve already spoiled the ending by mentioning threading in the title of this post, you can probably guess that there was a threading problem. Well, there was.

The ExecutionEngine class is obviously not thread-safe. At any point in time, if you have one instance of this class being used on multiple threads, you could conceivably get some very strange results. Best case would be errors. Worst case would be someone else's data! To illustrate:

  1. Thread A enters GetTheThingsForAUser
  2. Thread A sets the command text and type to the appropriate values.
  3. Thread B enters GetTheThingsForAUser
  4. Thread A clears the existing parameters and adds its User Id
  5. Thread B clears the parameters and adds its User Id
  6. Thread A executes, grabs the result and returns it. Thread A has just returned the values for a completely different user than the one it asked for, and has given no indication of this!

At the very least, the developer who created the class had thought about thread-safety (or someone had thought about it later).

public class DataAccessLayerModule : NinjectModule
{
    public override void Load()
    {
        Bind<ExecutionEngine>().ToSelf().InThreadScope();
    }
}

For those of you unfamiliar with the Thread scope, it ensures that there is one instance of the class instantiated per thread, created at the time of dependency resolution. It adds thread affinity to classes that don’t otherwise have it, but ONLY during construction.
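
A quick illustration of those semantics (just a demonstration, assuming the module above has been loaded into a standard kernel):

using System.Threading.Tasks;
using Ninject;

public static class ThreadScopeDemo
{
    public static void Demonstrate()
    {
        var kernel = new StandardKernel(new DataAccessLayerModule());

        // Two resolutions on the same thread hand back the same instance...
        var first = kernel.Get<ExecutionEngine>();
        var second = kernel.Get<ExecutionEngine>();
        // ReferenceEquals(first, second) == true

        // ...but a different thread gets its own instance.
        Task.Run(() =>
        {
            var third = kernel.Get<ExecutionEngine>();
            // ReferenceEquals(first, third) == false
        }).Wait();
    }
}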

At least there will be only one instance of this created per thread, and a single thread isn’t going to be jumping between multiple method executions (probably, I’m not sure to be honest) so at least the lack of thread safety might not be an issue.

This was, of course, a red herring. The lack of thread-safety was EXACTLY the issue. It took an embarrassingly large amount of time for me to track down the root cause. I debugged, watched the business objects being instantiated and then watched the execution engine being injected into them…with the correct thread affinity. Only the latest version of this web application was experiencing the issue, so it had to have been a relatively recent change (although this particular project did have quite a long and…storied history).

The root issue turned out to be the following:

public class DataAccessLayerModule : NinjectModule
{
    public override void Load()
    {
        // Some bindings.

        Bind<ExecutionEngine>().ToSelf().InThreadScope();

        // Some more bindings.

        Bind<Helper>().To<DefaultHelper>().InSingletonScope();
    }
}

See that second binding there, the one that’s a Singleton? It had a dependency on the ExecutionEngine. This of course threw a gigantic spanner in the works, as an instance of the ExecutionEngine was no longer only being used on one thread at a time, leaving it wide open to concurrency issues (which is exactly what was happening).

If you’re unfamiliar with the Singleton scope, it basically means that only one instance of the class is going to be instantiated in the application. This instance will then be re-used every time that dependency is requested.

At some point someone had refactored (that’s good!) one of the business classes (which were quite monolithic) and had extracted some of the functionality into that Helper class. Luckily this particular helper was only related to login, which explained why the failures only occurred during login in the load tests, so the impact was isolated.

The Solution

All of the business classes in the application were Transient scoped. This helper class was essentially a business class, but had been marked as Singleton for some reason. The simplest solution was to make it match the scoping of the other business classes and mark it as Transient too. This reduced the number of Login failures during the medium load test to 0 (yay!) which was good enough for the business.

The Better Solution

Of course, the code is still terrible underneath, and more subtle failures could still be lurking (can we be sure that every single time the ExecutionEngine is used it's only being used on the thread it was created on? Not without adding thread affinity into the class itself), but you don't always get time to fix underlying issues. As per my previous post, Champion Your Code, normally I would fight pretty hard to fix the root cause of the problem (that goddamn ExecutionEngine). This time though…well, the code had already been sold to someone who was going to develop it themselves and I wasn't going to be working for that organisation for much longer, so I took the pragmatic approach and left it as it was. Granted, it's essentially a tripwire for some future developer, which makes me sad, but you can't always fix all the problems.

If given the opportunity, I would probably change the way the ExecutionEngine is used at least, so that it isn't as vulnerable to concurrency issues. The easiest way to do this would be to make the Execute method take a Command and return a DataSet, removing all of the state (except the connection string) from the class. That way it doesn't matter how many threads attempt to Execute at the same time; they will all be isolated from each other. It's not a small change though, considering how many usages of the class in its current form existed, and the risk that that much change would introduce.
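
Something along these lines is what I had in mind (a sketch only, it was never actually applied to the codebase):

using System.Data;
using System.Data.SqlClient;

// Sketch of the stateless alternative: the only state left is the connection string,
// so concurrent callers can no longer interfere with each other's commands or results.
public class ExecutionEngine
{
    private readonly string _connectionString;

    public ExecutionEngine(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataSet Execute(SqlCommand command)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            command.Connection = connection;
            using (var adapter = new SqlDataAdapter(command))
            {
                var result = new DataSet();
                adapter.Fill(result);
                return result;
            }
        }
    }
}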

Summary

Singletons are dangerous. Well, I should say, Singletons that involve state in some way (either themselves, or through dependencies that involve state) are dangerous. If you go to mark something as being in Singleton scope, step away from the keyboard, go for a walk and think about it some more. There's probably another way to do it. When using dependency injection, the impact of making something a Singleton is not always immediately obvious, so you have to be extremely careful.


Fellow developers, this is a call to arms! Too often I see developers who immediately cave to external pressure to “just get it done”. “Just” is a dangerous word when it comes to software development anyway. Don’t do it! Stand up for your code, don’t compromise on quality just because someone is breathing down your neck. Take a few moments to think about the impact of what you are being asked to do, both to you (as the current developer) and to others (mostly future developers).

No-one wants to work inside crap code, but people seem to be more than happy to create it under pressure. Sure, you might think that you'll just fix it up next time, but I've found that that hardly ever happens.

Do you think that overbearing project manager is ever going to have to suffer through maintaining or refactoring code that "there wasn't enough time to test" or that is unclear as a result of "we need to just get this done now, we'll refactor it later"? Nope. The project manager's job is to get the project to some previously decided definition of done, and they may be more than willing to sacrifice fuzzy things like "code quality", "readability" and sometimes even "test coverage" or "scalability" in exchange for things that they are directly measured against, like "deadlines" and "fixed scope".

The representatives of the business will fight hard for what they believe is important. As a developer you should fight just as hard for the code. If the business representative is fighting for the “just get it done” point of view, you should fight just as hard for quality, readability, maintainability, cleanliness, and all of those other things that good code has. Stand up for what you believe in and make sure that you’re creating code that is pleasant to work with, otherwise you're going to start hating your job, and nobody wants that to happen.

I’m not prescribing that you just do what you want. That’s silly. You can’t just spend 3 months coming up with the most perfect architecture for solving problem X, or you might be surprised when you come up for air just in time to get made redundant because the organisation doesn’t have any money left.

What I am prescribing is that you fight just as hard for the code as other people fight for other things (like deadlines, estimates, contract sign-off, scope negotiation, etc).

As far as I can see there are two approaches to championing your code.

Communicate

Never attribute to malice that which is adequately explained by stupidity.

Robert J Hanlon

You (as a professional developer) know why quality is important, but other people in the organization might not. They might not realise the impact of what they are (either implicitly or explicitly) asking you to do.

You need to communicate the importance of taking the time to create quality software to the people who can actually make decisions. Of course, you need to be able to talk to those people in a language that they understand, so you need to be able to speak about things like "business impact", "cost of ownership" and "employee productivity". It's generally not a good idea to go to someone in Senior Management (even if they are technical) and start ranting about how the names of all of your classes and methods look like a foreign language because they are so terrible.

If your immediate superior doesn’t care, then go over their head until you reach someone who does. If you don’t find someone who cares, look for a better job.

Obviously be careful when doing this, as you could land yourself in some very awkward situations. At the very least, always be aware that you should show as much loyalty to your organisation as they would show you.

Do It Anyway

The conventional army loses if it does not win. The guerrilla wins if he does not lose.

Henry A Kissinger

If you feel like you are trying to communicate with a brick wall when it comes to talking about quality, you might need to engage in some “guerrilla craftsmanship”. Don’t explain exactly what you are doing in the code, and keep following good development practices. If you’re asked for estimates, include time for quality. Get buy-in from the rest of your fellow developers, create a shared coding standard. Implement code reviews. Ensure that code is appropriately covered by tests. To anyone outside the team, this is just what the team is doing. There’s no negotiation, you aren’t asking to develop quality solutions, you’re just doing it.

When that external pressure comes around again, and someone tries to get you to "just get it done", resist. Don't offer any additional information; this is just how long it's going to take. If you are consistent with your approach to quality, you will start delivering regularly, and that external pressure will dissipate.

Be mindful, this is a dangerous route, especially if you are facing a micro-manager or someone who does not trust developers. As bad as it sounds, you may need to find someone to “distract” or “manage” that person, kind of like dangling some keys in front of a baby, while you do what you need to do.

Conclusion

Fight for your code.

No one else will.


I love Dependency Injection.

I’ve only really been doing it for the past year or so, but I’ve noticed that smart usage of dependency injection makes code more loosely coupled, easier to change and easier to test.

Doing dependency injection well is hard. I highly suggest reading Dependency Injection in .NET. Actually, read that book AND read Mark Seemann's excellent blog as well.

That last bit, about classes designed with dependency injection being easier to test, is only kind of correct. They are certainly easy to isolate (for the purposes of unit testing), but classes that are designed with dependency injection should have their dependencies supplied during object construction (Constructor Injection).

The downside of using Constructor Injection, especially combined with Test Driven Development, is that your constructor is probably going to change quite a lot as you are developing. This, of course, has an impact on your unit tests, as depending on how you are instantiating your object, you may have to change quite a number of lines of code every time you make a change.

Annoying.

Tests shouldn't be onerous. Yes, making a change to a constructor will probably have an impact on a test, but that impact should come out in the test results, not during the compile, because it's the test results that show whether or not the impact of the change was meaningful. Also, getting hundreds of compiler errors just because you changed a constructor is kind of morale-crushing.

First Attempt

The obvious solution to the problem is to factor out the construction of your object into a method in your test class.

[TestClass]
public class AccountsRepositoryTest
{
    [TestMethod]
    public void AccountsRepository_SearchByMember_ReturnsOnlyMemberAccounts()
    {
        var expectedMember = "285164";

        var accounts = new List<Account>
        {
            // .. bunch of accounts here
        };

        var accountsPersistenceSubstitute = Substitute.For<AccountsPersistence>();
        accountsPersistenceSubstitute.Retrieve().Returns(accounts);

        var target = CreateTarget(accountsPersistence: accountsPersistenceSubstitute);

        // .. rest of the test here
    }

    private AccountsRepository CreateTarget(AccountsPersistence accountsPersistence = null)
    {
        // Substitute any dependency the test doesn't explicitly care about.
        if (accountsPersistence == null)
        {
            accountsPersistence = Substitute.For<AccountsPersistence>();
        }

        var target = new AccountsRepository(accountsPersistence);
        return target;
    }
}

This is much better than riddling your tests with direct calls to the constructor, but it's still an awful lot of code that I would prefer to not have to write (or maintain). It can start getting pretty onerous when your class has a few dependencies as well.


There’s got to be a better way!

Second Attempt

Well, there is. One of the reasons why Inversion of Control Containers exist is to help us construct our objects, and to allow us to change our constructors without having to change a bunch of code (yes there are many other reasons they exist, but creating object graphs is definitely one of the bigger ones).

Why not use an IoC container in the unit tests?

What I do now is:

[TestClass]
public class AccountsRepositoryTest
{
    [TestMethod]
    public void AccountsRepository_SearchByMember_ReturnsOnlyMemberAccounts()
    {
        var expectedMember = "285164";
        
        var accounts = new List<Account>
        {
            // .. bunch of accounts here
        };

        var accountsPersistenceSubstitute = Substitute.For<AccountsPersistence>();
        accountsPersistenceSubstitute.Retrieve().Returns(accounts);

        var kernel = CreateKernel();
        kernel.Rebind<AccountsPersistence>().ToConstant(accountsPersistenceSubstitute);        

        var target = kernel.Get<AccountsRepository>();

        // .. rest of test
    }

    private IKernel CreateKernel()
    {
        var kernel = new NSubstituteMockingKernel();
        return kernel;
    }
}

Much better. Leveraging the power of the Ninject IKernel, along with the NSubstitute MockingKernel extension, allows me to only have to implement a small amount of code in each new test class (just the simple CreateKernel method). From the example it's not immediately obvious what the benefits are, because I've had to write more code into the test method to deal with the kernel, but this approach really comes into its own when you have many dependencies (instead of just one) or your constructor is changing a lot.

Pros:

  • The test methods lose no expressiveness (they still state the dependencies they need control over and rebind them as necessary).
  • I’ve dropped all of the annoying boilerplate code that sets up substitutes for all of the dependencies that I don’t care about.
  • I don’t need to deal with a method in each test class that is essentially a surrogate constructor (which will need to change every time the constructor changes).

Cons:

  • I've hidden the pain that can come from having a class with many dependencies. This is a good pain; it tells you that something is wrong with your class, and tests are one of the easiest places to feel it.
  • The test projects are now dependent on the IoC container.

I think it's a worthy trade-off.

Special Kernels and Modules

One of the great things about Ninject is the ability to describe Modules (which contain binding information) and Kernels (which can load certain modules by default, and provide bindings of their own).

If you have certain bindings that need to be valid for your unit tests, or configured in some specific way, you can create a specific UnitTestModule that contains those bindings. I usually combine this with a UnitTestKernel, which just loads that UnitTestModule by default (just so I don’t have to manually load it in every CreateKernel method).

A good example of a use case for this is a project that I’m working on in my spare time. It is a WPF desktop application using the MVVM pattern, and makes use of TaskSchedulers to perform background work. I say TaskSchedulers plural because there are two, one for background threads and one for the main thread (to commit results to the View Model so that the UI will correctly refresh the bindings).

Unit testing code that involves multiple threads at its most basic level can be extremely difficult, especially if the background/foreground work is encapsulated well in the class.

This is where the UnitTestModule comes in. It provides a binding for the pair of TaskSchedulers (background and foreground) to an implementation of a single-threaded (or synchronous) TaskScheduler. This means that any background work happens synchronously, which makes the View Models much easier to test. You wouldn't want to repeat this binding for every single View Model test class, so the UnitTestModule (and thus the UnitTestKernel) is the perfect place for it.
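
As a rough sketch (the named bindings are illustrative; the real application distinguishes the background and foreground schedulers in its own way), the module and kernel end up looking something like this:

using System.Collections.Generic;
using System.Threading.Tasks;
using Ninject.MockingKernel.NSubstitute;
using Ninject.Modules;

// A trivially synchronous scheduler: every queued task runs immediately on the calling thread.
public class CurrentThreadTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        TryExecuteTask(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return new Task[0];
    }
}

// Bindings that every unit test needs: both schedulers resolve to the synchronous implementation.
public class UnitTestModule : NinjectModule
{
    public override void Load()
    {
        Bind<TaskScheduler>().To<CurrentThreadTaskScheduler>().Named("Background");
        Bind<TaskScheduler>().To<CurrentThreadTaskScheduler>().Named("Foreground");
    }
}

// A kernel that loads the UnitTestModule by default, so each CreateKernel method stays trivial.
public class UnitTestKernel : NSubstituteMockingKernel
{
    public UnitTestKernel()
    {
        this.Load(new INinjectModule[] { new UnitTestModule() });
    }
}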

TaskSchedulers are great, precisely because you can provide your own implementations. I've used a CurrentThreadTaskScheduler and even a ManualTaskScheduler for the purposes of unit testing, and it really does make everything much easier to test. Of course, the implementations of TaskScheduler that are used in the real app need to be tested too, but that's what integration testing is for.

Conclusion

Tests shouldn’t be something that causes you to groan in frustration every time you have to make a change. I find that using an IoC container in my unit tests removes at least one of those “groan” points, the constructor, and feels much cleaner.


I'm going to show my ignorance here for a second. Don't get me wrong, it's always there, I just don't always show it.

I didn't understand what assembly binding redirection did until yesterday. I mean, I always saw app.config files being added/modified whenever I added a new package through NuGet, and it piqued my interest, but I never really had a reason to investigate further. It didn't seem to be hurting anything (apart from some increased complexity in the config file) so I left it to do whatever the hell it was doing.

Yesterday (and the day before) I went through a solution with 58 projects with the intent of updating all of the Ninject references to the latest version, after adding a reference to the Ninject.MockingKernel.Moq package to a test project.

I know what you might be thinking: 58 projects! Yes, I know that's a lot. It's a bigger problem than I can solve right now though. It's on my list.

I know the second thing you might be thinking too. How many references to Ninject were there to update? A well factored application would only have 1 reference, in the entry point (or Composition Root). This particular solution had 4 entry points (3 web applications and a service), but references to Ninject were riddled throughout the rest of the projects as well, for a number of reasons:

  • Usage of the Service Locator anti-pattern, which exposed a static IKernel to, well, anything and everything.
  • Usage of Ninject attributes (property injection, marking which constructor to use for the kernel, etc).
  • Some projects had NinjectModules that mapped the interfaces in those projects to the concrete implementation (also in those projects).

IoC containers (and in particular Ninject) are amazing for a variety of reasons I won't go into here, but it's very easy to use them poorly.

Normally this would be a fairly painless process, just go into the NuGet package manager, check the Updates section, select Ninject and hit update. The problem was, not all of the references to Ninject had been added through NuGet. Some were hard references to a version of Ninject that had been downloaded and placed in a lib folder. Other references to Ninject had been added automatically as a result of adding NuGet packages with Ninject dependencies (like Agatha).

Of course, I didn’t realise that some of the references had been manually added, so I naively just used NuGet to update all the references it did know about, compiled, and then did a quick smoke test on the web applications.

Chaos.

Specifically this:

Could not load file or assembly ‘Ninject, Version=3.0.0.0, Culture=neutral, PublicKeyToken=c7192dc5380945e7’ or one of its dependencies. The located assembly’s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040).

Fair enough, that seems straightforward. Something is trying to load the old version, and it's going pear-shaped because the only Ninject.dll in the output directory is version 3.2.0.0. I went through the projects with a fine-tooth comb, discovered that not all of the references had been updated to the latest version (and that some weren't even using NuGet), fixed all that and tried again.

Still the same error.

I was sure that I had caught all the references, so I searched through all of the .csproj files for any reference to 3.0.0.0 and couldn't find any.

If you're familiar with binding redirects, you can probably already guess the mistake I made.

When I did the upgrade/installation, I was very wary of unintended changes to other parts of the system. One of the things that happened as a result of installing/updating through NuGet was the editing or addition of many app.config files for the projects in the solution. Specifically, the addition of the following chunk of XML to every project using Ninject.

<dependentAssembly> 
    <assemblyIdentity 
        name="Ninject" 
        publicKeyToken="c7192dc5380945e7" 
        culture="neutral" /> 
    <bindingRedirect 
        oldVersion="0.0.0.0-3.2.0.0" 
        newVersion="3.2.0.0" /> 
</dependentAssembly>

Here’s where the ignorance I mentioned earlier shows up. I thought that since I wasn’t going to be using version 3.0.0.0 of Ninject anywhere, I could safely ignore those changes, so I removed them.

After an embarrassing amount of time spent yelling at my development environment and searching the internet (“Could not load X” errors are surprisingly common) I finally realised that it was my actions that caused my issue.

I was right. None of my assemblies were using Ninject 3.0.0.0. However, the Agatha.Ninject.dll assembly did. Having no control over that assembly, I couldn't upgrade its reference. Of course, NuGet had already thought of this and helpfully solved the problem for me…until I just ignored its suggested configuration.

A bindingRedirect entry in an app.config file forces all assembly bindings for that assembly to redirect to the version specified. This works not just for DLLs that you have control over (i.e. your entry point and your projects) but also for every assembly that you load, and their dependencies as well.

Restoring the bindingRedirects for the entry points (the 3 web applications and the service) fixed the issue. I left it out of the rest of the projects because it seems like the sort of thing you want to set only at the entry point (kind of like IoC container configuration).

So in summary, never assume something exists for no reason, assume you just don’t understand the reason yet.