A very long time ago I wrote a post on this blog about interceptors.

The idea behind an interceptor is pretty straightforward: dynamically create a wrapper around some interface or class to either augment or alter its behaviour when used by the rest of the system, without actually having to implement the interface or override the class. For example, my original post was for an interceptor that slowed down requests heading to an API, simulating how that particular application would feel for a user with terrible internet.

I honestly haven’t touched the concept since, until recently that is.

I wanted to add some logging around usage of a third party API from our legacy application and conceptually the problem seemed like a perfect application for another interceptor. A quick implementation of a logging interceptor injected via Ninject and I’d have all the logging I could ever want, without having to mess around too much.

Reality had other ideas though.

Here’s Why I Came Here Tonight

Our legacy software is at that time in its life where it mostly just gets integrations. It’s pretty feature complete as far as core functionality goes, so until the day we finally grant it Finis Rerum and it can rest, we look to add value to our users by integrating with third party services.

The most recent foray in this space integrated a payment provider into the software, which is quite useful considering its core value proposition is trust account management. From a technical point of view, the payment provider has an API and we wrote a client to access that API, with structured request and response objects. Pretty standard stuff.

As part of our development, we included various log events that allowed us to track the behaviour of parts of the system, mostly so that we could more easily support the application and get accurate metrics and feedback from users in the wild relating to performance. This is all well and good, but those events generally cover off combined parts of the application logic; for example, an operation that queries the local DB and then augments that information by calling into the third party API to display a screen to the user.

This makes it relatively easy to see when any users are experiencing performance issues, but hard to tell whether the local DB, the application logic or the call over the internet was the root cause.

An improvement to this would be to also log any outgoing API requests and their responses, along with the execution time. With that information we would be able to either blame or absolve the client’s internet connection when it comes to performance questions.

Now, I’m an extremely lazy developer, so while we have a nice interface that I could implement to accomplish this (i.e. some sort of LoggingPaymentProviderClient), it’s got like twenty methods and I really don’t have the time, patience or motivation for that. Also, while it’s unlikely that the interface would change all that much over time, it’s still something of a maintenance nightmare.

Interceptors to the rescue!

But I Got The Feeling That Something Ain’t Right

As I explained in my original post all those years ago, the IInterceptor interface supplied by the Castle library allows you to implement logic via a proxy and slot it seamlessly into a call stack. Its usage is made easier by the presence of good dependency injection, but it’s definitely not required at all.

Thus enters the logging interceptor.

public class PaymentProviderClientMethodExecutionLoggingInterceptor : IInterceptor
{
    public PaymentProviderClientMethodExecutionLoggingInterceptor(ILogger logger)
    {
        _logger = logger;
    }

    private readonly ILogger _logger;

    public void Intercept(IInvocation invocation)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            invocation.Proceed();
            stopwatch.Stop();

            var log = new PaymentProviderMethodCallCompleted(invocation, stopwatch.Elapsed);
            _logger.Information(log);
        }
        catch (Exception ex)
        {
            stopwatch.Stop();

            var log = new PaymentProviderMethodCallFailed(invocation, stopwatch.Elapsed, ex);
            _logger.Warning(log);

            throw;
        }
    }
}

It’s not an overly complicated class, and while it’s written to be specific, it’s actually quite generic.

Given a proxied class, all methods called on that class will be logged via Serilog, with the method name, its parameters and its return value (the structured logging being provided by the dedicated event classes).
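
The event classes themselves aren’t shown here, but they’re essentially just bags of data for Serilog to destructure. A rough sketch of what one might look like (my reconstruction for illustration, not the exact implementation):

public class PaymentProviderMethodCallCompleted
{
    public PaymentProviderMethodCallCompleted(IInvocation invocation, TimeSpan elapsed)
    {
        // Capture the invocation details as simple properties so that Serilog
        // can destructure them into a structured event.
        MethodName = invocation.Method.Name;
        Arguments = invocation.Arguments;
        ReturnValue = invocation.ReturnValue;
        Elapsed = elapsed;
    }

    public string MethodName { get; private set; }
    public object[] Arguments { get; private set; }
    public object ReturnValue { get; private set; }
    public TimeSpan Elapsed { get; private set; }
}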

Nothing ever works the first time though, and while I’m constantly being reminded of that, I’m always hopeful all the same. Denial is a powerful motivator.

The problem was that the IInterceptor interface is old enough that it doesn’t consider the existence of asynchronous methods. It does exactly what it says on the tin: starts a timer, proceeds with the method invocation and then, because the method is asynchronous, immediately logs an event with the wrong execution time and no return value.

It has no idea that it has to wait for the invocation to complete because it thinks everything is synchronous.

Clowns To The Left Of Me, Jokers To The Right

This is where everything is going to get a little bit fuzzier than I would like, because I wrote this blog post before I had a working solution.

From what I can tell, the situation is quite complex.

The simplest solution appears to be to leverage the existing interface and simply check for the presence of a Task (or Task<T>) return value. If detected, append a continuation to that Task to perform the desired functionality. For me this would be a continuation on both faulted and success (and maybe cancelled?) that would log the completion of the method. It seems like it would work, but I do have some concerns about the scheduling of the continuation and how that makes the code harder to reason about.
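
To make that concrete, here’s a rough sketch of what the Intercept method might become (a sketch only; it drops the synchronous try/catch from the original for brevity, ignores cancellation, and pulling the actual result out of a Task<T> requires reflection, which I’ve left out):

public void Intercept(IInvocation invocation)
{
    var stopwatch = Stopwatch.StartNew();
    invocation.Proceed();

    var task = invocation.ReturnValue as Task;
    if (task != null)
    {
        // Asynchronous method; defer the logging until the task actually completes.
        task.ContinueWith(t =>
        {
            stopwatch.Stop();
            if (t.IsFaulted)
            {
                _logger.Warning(new PaymentProviderMethodCallFailed(invocation, stopwatch.Elapsed, t.Exception));
            }
            else
            {
                _logger.Information(new PaymentProviderMethodCallCompleted(invocation, stopwatch.Elapsed));
            }
        }, TaskContinuationOptions.ExecuteSynchronously);
    }
    else
    {
        // Synchronous method; log immediately, as before.
        stopwatch.Stop();
        _logger.Information(new PaymentProviderMethodCallCompleted(invocation, stopwatch.Elapsed));
    }
}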

Luckily, someone has already put together a reusable library that allows for asynchronous interceptors via a slightly different interface.

This is attractive because it’s code that I don’t have to write (remember, I’m lazy), but it not being built into the core Castle library does make me question its legitimacy. Surely if it was that critical the maintainers would have updated Castle.Core?

Regardless, I explored using the library first, but in order to use it I had to go on an adventure to upgrade a bunch of our NuGet dependencies (because it relied on the latest version of Castle.Core), which meant updates to Castle, Ninject and Ninject’s extension libraries. This caused knock-on effects because the Ninject.MockingKernel.NSubstitute library was not available for .NET 4.5 (even though all the others were), so I had to temporarily copy that particular implementation into our codebase.

Once everything was compiling, a full test run showed some failing tests that weren’t failing before the library upgrades, so I kind of stopped there.

For now.

Conclusion

Unfortunately this is one of those blog posts that comes off feeling a little hollow. I didn’t actually get to my objective (seamless per method logging for a third-party dependency), but I did learn a lot on the way, so I think it was still useful to write about.

I probably should have waited a little longer though; I jumped the gun a bit.

It’s not the only time in recent memory that asynchronous behaviour has made things more difficult than they would have been otherwise. In an unrelated matter, some of our automated tests have been flaky recently, and the core of the issue seems to be asynchronous behaviour that is either (a) not being correctly overridden to be synchronous for testing purposes or (b) not correctly being waited for before tests proceed.

It’s not hard to write tests that are purely async of course (NUnit supports tests marked with the async keyword), but when you’re testing a view model and the commands are “synchronous” it gets a bit more challenging.
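
For completeness, the purely async case really is trivial (the view model here is hypothetical, just to show the shape of such a test):

[Test]
public async Task SaveCommand_WhenExecuted_PersistsTheRecord()
{
    // Hypothetical view model with an async operation to exercise.
    var viewModel = new RecordViewModel();

    await viewModel.SaveAsync();

    Assert.That(viewModel.IsSaved, Is.True);
}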

That’s probably a topic for another day though.

I have an interface.

I’m sure you have some interfaces as well. If you don’t use interfaces you’re probably doing it wrong anyway.

My interface is a bit of a monolith, which happens sometimes. It’s not so big that I can justify investing the time in splitting it apart, but it’s large enough that it’s hard to implement and just kind of feels wrong. I don’t need to implement this interface manually very often (yay for mocking frameworks like NSubstitute) so I can live with it for now. Not everything can be perfect, alas.

This particular interface allows a user to access a RESTful web service, and comes with some supporting data transfer objects.

I recently had the desire/need to see what would happen to the user experience of an application using this interface (and its default implementation) if the internet connection was slow, i.e. the calls to the service were delayed or timed out.

Obviously I could have implemented the interface with a wrapper and manually added slowdown/timeout functionality. As I mentioned previously though, there were enough methods in this interface that that sounded like a time-consuming proposition. Not only that, but it would mean I would be tightly coupled to the interface, just to introduce some trivial slowdown code. If the interface changed, I would need to change my slowdown code. That’s bad, as the functionality of my code is distinctly separate from the functionality of the interface, and I should be able to reuse that (admittedly trivial) code anywhere I like.

Plus I’m a lazy programmer. I’ll always go out of my way to write as little code as possible.

Aspect Oriented Programming

What I wanted to do was to be able to describe some behaviour about all of the calls on my interface, without actually having to manually write the code myself.

Luckily this concept has already been invented by people much smarter than me. It’s generally referred to as Aspect Oriented Programming (AOP). There’s a lot more to AOP than just adding functionality unobtrusively to calls through an interface, but fundamentally it is about supporting cross cutting concerns (logging, security, throttling, auditing, etc) without having to rewrite the same code over and over again.

In this particular case I leveraged the IInterceptor interface supplied by the Castle.DynamicProxy framework. Castle.DynamicProxy is included in the Castle.Core package, and is part of the overarching Castle Project. It is a utility library for generating proxies for abstract classes and interfaces and is used by Ninject and NSubstitute, as well as other Dependency Injection and mocking/substitution frameworks.

Castle.DynamicProxy provides an interface called IInterceptor.

public interface IInterceptor
{
    void Intercept(IInvocation invocation);
}

Of course, that definition doesn’t make a lot of sense without the IInvocation interface (trimmed of all comments for brevity).

public interface IInvocation
{
    object[] Arguments { get; }
    Type[] GenericArguments { get; }
    object InvocationTarget { get; }
    MethodInfo Method { get; }
    MethodInfo MethodInvocationTarget { get; }
    object Proxy { get; }
    object ReturnValue { get; set; }
    Type TargetType { get; }
    object GetArgumentValue(int index);
    MethodInfo GetConcreteMethod();
    MethodInfo GetConcreteMethodInvocationTarget();
    void Proceed();
    void SetArgumentValue(int index, object value);
}

You can see from the above definition that the IInvocation provides information about the method that is being called, along with a mechanism to actually call the method (Proceed).

You can implement this interface to do whatever you want, so I implemented an interceptor that slows down all method calls by a set amount (hardcoded to 5 seconds below, though it could easily be made configurable). You can then use your interceptor whenever you create a proxy class.

public class DelayingInterceptor : IInterceptor
{
    private static readonly TimeSpan _Delay = TimeSpan.FromSeconds(5);

    public DelayingInterceptor(ILog log)
    {
        _Log = log;
    }

    private readonly ILog _Log;

    public void Intercept(IInvocation invocation)
    {
        _Log.DebugFormat("Slowing down invocation of [{0}] by [{1}] milliseconds.", invocation.Method.Name, _Delay.TotalMilliseconds);
        Thread.Sleep(_Delay);
        invocation.Proceed();
    }
}

Proxy classes are fantastic. Essentially they are automatically implemented wrappers for an interface, often with a concrete implementation to wrap supplied during construction. When you create a proxy you can choose to supply an interceptor that will be automatically executed whenever a method call is made on the interface.

This example code shows how easy it is to set up a proxy for the fictitious IFoo class, and delay all calls made to its methods by the amount described above.

IFoo toWrap = new Foo();

var generator = new ProxyGenerator();
var interceptor = new DelayingInterceptor(log4net.LogManager.GetLogger("TEST"));
var proxy = generator.CreateInterfaceProxyWithTarget(toWrap, interceptor);

As long as you are talking in interfaces (or at the very least abstract classes) you can do just about anything!

Stealthy Interception

If you use Ninject, it offers the ability to add interceptors to any binding automatically using the optional Ninject.Extensions.Interception library.

You still have to implement IInterceptor, but you don’t have to manually create a proxy yourself.
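
From memory, a binding using the extension looks something like this (treat the exact syntax as approximate), reusing the fictitious IFoo from the earlier example:

public class SlowdownModule : NinjectModule
{
    public override void Load()
    {
        // Every resolved IFoo is automatically wrapped in a proxy that runs
        // the DelayingInterceptor; no manual ProxyGenerator required.
        Bind<IFoo>().To<Foo>().Intercept().With<DelayingInterceptor>();
    }
}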

In my case, I wasn’t able to leverage Ninject (even though the application was already using it), as I was already using a Factory that had some logic in it. This stopped me from simply using Ninject bindings for the places where I was using the interface. I can see Ninject’s support for interception being very useful though, now that I understand how interceptors work. In fact, since my slowdown interceptor is very generic, I could conceivably experiment with slowdowns at various levels in the application, from disk writes to background processes, just to see what happens. It’s always nice to have that sort of power to see how your application will actually react when things are going wrong.

Other Ways

I’m honestly not entirely sure if Interceptors fit the classic definition of Aspect Oriented Programming. They do allow you to implement cross cutting concerns (like my slowdown), but I generally see AOP referred to in the context of code-weaving.

Code-weaving is where code is automatically added into your classes at compile time. You can use this to automatically add boilerplate code, like null checking on constructor arguments and whatnot, without having to write the code yourself. Just describe that you want the parameters to be null checked and the code will be added at compile time. I’m not overly fond of this approach personally, as I like having the code in source control represent reality. I can imagine using code-weaving might lead to situations where it is more difficult to debug the code because the source doesn’t line up with the compiled artefacts.

I don’t have any experience here, I’m just mentioning it for completeness.

Conclusion

In cases where you need to be able to describe some generic piece of code that occurs for all method calls of an interface, Interceptors are fantastic. They raise the level that you are coding at, in my opinion, moving beyond writing code that directly tells the computer what to do and into describing the behaviour that you want. This leads to less code that needs to be maintained and fewer hard couplings to the interfaces (as you would get if you implemented the wrapper yourself). Kind of like using an IoC container with your tests (enabling you to freely change your constructor without getting compile errors), you can freely change your interface and not have it impact your interceptors.

I’m already thinking of other ways in which I can leverage interceptors. One that immediately comes to mind is logging calls to the service, and timing how long they take, which is invaluable when investigating issues at the user’s end and for monitoring performance.

Interceptors provide a great, modular and decoupled way to accomplish certain cross cutting concerns, and I’m going to try and find more ways to leverage them in the future now that I know they exist.

You should too.

Unless you aren’t using interfaces.

Then you’ve got bigger problems.

Yeah, sorry about the title. They can’t all be winners.

Anyway, I tracked down an interesting bug a few weeks ago, and I thought that it might be worthwhile to discuss it here, so that I can see it later when the memory fades. Also it might help someone else, which is nice, I guess.

The Problem

There was a web application. Some performance/load testing was being completed (for the first time, after development had “completed”, because who needs an Agile approach, right?) and the results showed that there was an unreasonable number of failures during login. Something in the realm of 10% of all login attempts under a medium load would fail.

The Cause

At the root, the bug involved this class:

public class ExecutionEngine
{
    public string ConnectionString { get; set; }
    public SqlCommand Command { get; set; }
    public DataSet Result { get; set; }

    public void Execute()
    {
        var conn = new SqlConnection(ConnectionString);
        Command.Connection = conn;
        var adapter = new SqlDataAdapter(Command);

        Result = new DataSet();
        adapter.Fill(Result);
    }
}

Pretty terrible all things considered. Weird way to use SqlCommands and DataSets, but okay.

A typical usage of the ExecutionEngine class was as follows:

public class Demonstration
{
    public Demonstration(ExecutionEngine engine)
    {
        _Engine = engine;
    }

    private readonly ExecutionEngine _Engine;

    public IEnumerable<string> GetTheThingsForAUser(string userId)
    {
        _Engine.Command.CommandText = "GetAListOfAllTheThings";
        _Engine.Command.CommandType = CommandType.StoredProcedure;

        _Engine.Command.Parameters.Clear();
        _Engine.Command.Parameters.AddWithValue("UserId", userId);

        _Engine.Execute();

        var allTheThings = new List<string>();
        foreach (DataRow thing in _Engine.Result.Tables[0].Rows)
        {
            allTheThings.Add((string)thing["Name"]);
        }

        return allTheThings;
    }
}

There were a LOT of usages like the demonstration class above (100+), jammed into one SUPER-class called “DataAccessLayer”. This “DAL” was a dependency of the business classes, which were used by the rest of the system. An instance of a business class would be instantiated as needed, which in turn would resolve its dependencies (using Ninject) and then be used to service the incoming request.

Given that I’ve already spoiled the ending by mentioning threading in the title of this post, you can probably guess that there was a threading problem. Well, there was.

The ExecutionEngine class is obviously not thread-safe. At any point in time, if you have one instance of this class being used on multiple threads, you could conceivably get some very strange results. Best case would be errors. Worst case would be someone else’s data! To illustrate:

  1. Thread A enters GetTheThingsForAUser
  2. Thread A sets the command text and type to the appropriate values.
  3. Thread B enters GetTheThingsForAUser
  4. Thread A clears the existing parameters and adds its User Id
  5. Thread B clears the parameters and adds its User Id
  6. Thread A executes, grabs the result and returns it. Thread A has just returned the values for a completely different user than the one it asked for, but has given no indication of this!

At the very least, the developer who created the class had thought about thread-safety (or someone had thought about it later).

public class DataAccessLayerModule : NinjectModule
{
    public override void Load()
    {
        Bind<ExecutionEngine>().ToSelf().InThreadScope();
    }
}

For those of you unfamiliar with the Thread scope, it ensures that there is one instance of the class instantiated per thread, created at the time of dependency resolution. It adds thread affinity to classes that don’t otherwise have it, but ONLY during construction.

At least there will be only one instance of this created per thread, and a single thread isn’t going to be jumping between multiple method executions (probably, I’m not sure to be honest) so at least the lack of thread safety might not be an issue.

This was, of course, a red herring. The lack of thread-safety was EXACTLY the issue. It took an embarrassingly large amount of time for me to track down the root cause. I debugged, watched the business objects being instantiated and then watched the execution engine being injected into them…with the correct thread affinity. Only the latest version of this web application was experiencing the issue, so it had to have been a relatively recent change (although this particular project did have quite a long and…storied history).

The root issue turned out to be the following:

public class DataAccessLayerModule : NinjectModule
{
    public override void Load()
    {
        // Some bindings.

        Bind<ExecutionEngine>().ToSelf().InThreadScope();

        // Some more bindings.

        Bind<Helper>().To<DefaultHelper>().InSingletonScope();
    }
}

See that second binding there, the one that’s a Singleton? It had a dependency on the ExecutionEngine. This of course threw a gigantic spanner in the works, as an instance of the ExecutionEngine was no longer only being used on one thread at a time, leaving it wide open to concurrency issues (which is exactly what was happening).

If you’re unfamiliar with the Singleton scope, it basically means that only one instance of the class is going to be instantiated in the application. This instance will then be re-used every time that dependency is requested.

At some point someone had refactored (that’s good!) one of the business classes (which were quite monolithic) and had extracted some of the functionality into that Helper class. Luckily this particular helper was only related to login, which explained why the failures only occurred during login in the load tests, so the impact was isolated.

The Solution

All of the business classes in the application were Transient scoped. This helper class was essentially a business class, but had been marked as Singleton for some reason. The simplest solution was to make it match the scoping of the other business classes and mark it as Transient too. This reduced the number of Login failures during the medium load test to 0 (yay!) which was good enough for the business.
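
In binding terms, the fix was a one-liner, swapping the scope on the binding shown earlier:

Bind<Helper>().To<DefaultHelper>().InTransientScope();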

The Better Solution

Of course, the code is still terrible underneath, and more subtle failures could still be lurking (can we be sure that every single time the ExecutionEngine is used, it’s only being used on the thread it was created on? Not without adding thread affinity into the class itself), but you don’t always get time to fix underlying issues. As per my previous post, Champion Your Code, normally I would fight pretty hard to fix the root cause of the problem (that goddamn ExecutionEngine). This time though…well, the code had already been sold to someone who was going to develop it themselves and I wasn’t going to be working for that organisation for much longer, so I took the pragmatic approach and left it as it was. Granted, it’s essentially a tripwire for some future developer, which makes me sad, but you can’t always fix all the problems.

If given the opportunity I would probably change the way the ExecutionEngine is used at least, so that it isn’t as vulnerable to concurrency issues. The easiest way to do this would be to make the Execute method take a Command and return a DataSet, removing all of the state (except the connection string) from the class. That way it doesn’t matter how many threads attempt to Execute at the same time, they will all be isolated from each other. Not a small change, considering how many usages of the class in its current form existed, and the risk that that much change would introduce.
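
Sketched out, the reworked class might look something like this (illustrative only, given how many call sites would need to change):

public class ExecutionEngine
{
    public ExecutionEngine(string connectionString)
    {
        _connectionString = connectionString;
    }

    private readonly string _connectionString;

    public DataSet Execute(SqlCommand command)
    {
        // All state is local to the call, so concurrent callers can no longer
        // trample each other's commands and results.
        using (var conn = new SqlConnection(_connectionString))
        {
            command.Connection = conn;
            using (var adapter = new SqlDataAdapter(command))
            {
                var result = new DataSet();
                adapter.Fill(result);
                return result;
            }
        }
    }
}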

Summary

Singletons are dangerous. Well, I should say, Singletons that involve state in some way (either themselves, or with dependencies that involve state) are dangerous. If you go to mark something as being in Singleton scope, step away from the keyboard, go for a walk and think about it some more. There’s probably another way to do it. When using dependency injection, the impact of making something a Singleton is not always immediately obvious, so you have to be extremely careful.

I love Dependency Injection.

I’ve only really been doing it for the past year or so, but I’ve noticed that smart usage of dependency injection makes code more loosely coupled, easier to change and easier to test.

Doing dependency injection well is hard. I highly suggest reading Dependency Injection in .NET. Actually, read that book AND read Mark Seemann’s excellent blog as well.

That last bit about classes designed with dependency injection being easier to test is kind of correct. They are certainly easy to isolate (for the purposes of unit testing), but classes that are designed with dependency injection should have their dependencies supplied during object construction (Constructor Injection).

The downside of using Constructor Injection, especially combined with Test Driven Development, is that your constructor is probably going to change quite a lot as you are developing. This, of course, has an impact on your unit tests, as depending on how you are instantiating your object, you may have to change quite a number of lines of code every time you make a change.

Annoying.

Tests shouldn’t be onerous. Yes, making a change to a constructor will probably have an impact on a test, but that impact should come out in the test results, not during the compile, because it’s the test results that show whether or not the impact of the change was meaningful. Also, getting hundreds of compiler errors just because you changed a constructor is kind of morale crushing.

First Attempt

The obvious solution to the problem is to factor out the construction of your object into a method in your test class.

[TestClass]
public class AccountsRepositoryTest
{
    [TestMethod]
    public void AccountsRepository_SearchByMember_ReturnsOnlyMemberAccounts()
    {
        var expectedMember = "285164";

        var accounts = new List<Account>
        {
            // .. bunch of accounts here
        };

        var accountsPersistenceSubstitute = Substitute.For<AccountsPersistence>();
        accountsPersistenceSubstitute.Retrieve().Returns(accounts);

        var target = CreateTarget(accountsPersistence: accountsPersistenceSubstitute);

        // .. rest of the test here
    }

    private AccountsRepository CreateTarget(AccountsPersistence accountsPersistence = null)
    {
        // Substitute any dependency that the test did not explicitly supply.
        if (accountsPersistence == null)
        {
            accountsPersistence = Substitute.For<AccountsPersistence>();
        }

        var target = new AccountsRepository(accountsPersistence);
        return target;
    }
}

This is much better than riddling your tests with direct calls to the constructor, but it’s still an awful lot of code that I would prefer to not have to write (or maintain). It can start getting pretty onerous when your class has a few dependencies as well.

There’s got to be a better way!

Second Attempt

Well, there is. One of the reasons why Inversion of Control Containers exist is to help us construct our objects, and to allow us to change our constructors without having to change a bunch of code (yes there are many other reasons they exist, but creating object graphs is definitely one of the bigger ones).

Why not use an IoC container in the unit tests?

What I do now is:

[TestClass]
public class AccountsRepositoryTest
{
    [TestMethod]
    public void AccountsRepository_SearchByMember_ReturnsOnlyMemberAccounts()
    {
        var expectedMember = "285164";
        
        var accounts = new List<Account>
        {
            // .. bunch of accounts here
        };

        var accountsPersistenceSubstitute = Substitute.For<AccountsPersistence>();
        accountsPersistenceSubstitute.Retrieve().Returns(accounts);

        var kernel = CreateKernel();
        kernel.Rebind<AccountsPersistence>().ToConstant(accountsPersistenceSubstitute);        

        var target = kernel.Get<AccountsRepository>();

        // .. rest of test
    }

    private IKernel CreateKernel()
    {
        var kernel = new NSubstituteMockingKernel();
        return kernel;
    }
}

Much better. Leveraging the power of the Ninject IKernel, along with the NSubstitute MockingKernel extension, allows me to only have to implement a small amount of code in each new test class (just the simple CreateKernel method). From the example it’s not immediately obvious what the benefits are, because I’ve had to write more code into the test method to deal with the kernel, but this approach really comes into its own when you have many dependencies (instead of just one) or your constructor is changing a lot.

Pros:

  • The test methods lose no expressiveness (they still state the dependencies they need control over and rebind them as necessary).
  • I’ve dropped all of the annoying boilerplate code that sets up substitutes for all of the dependencies that I don’t care about.
  • I don’t need to deal with a method in each test class that is essentially a surrogate constructor (which will need to change every time the constructor changes).

Cons:

  • I’ve hidden the pain that can come from having a class with many dependencies. This is a good pain, it tells you that something is wrong with your class, and tests are one of the easiest places to feel it.
  • The test projects are now dependent on the IoC container.

I think it’s a worthy trade-off.

Special Kernels and Modules

One of the great things about Ninject is the ability to describe Modules (which contain binding information) and Kernels (which can load certain modules by default, and provide bindings of their own).

If you have certain bindings that need to be valid for your unit tests, or configured in some specific way, you can create a specific UnitTestModule that contains those bindings. I usually combine this with a UnitTestKernel, which just loads that UnitTestModule by default (just so I don’t have to manually load it in every CreateKernel method).

A good example of a use case for this is a project that I’m working on in my spare time. It is a WPF desktop application using the MVVM pattern, and makes use of TaskSchedulers to perform background work. I say TaskSchedulers plural because there are two, one for background threads and one for the main thread (to commit results to the View Model so that the UI will correctly refresh the bindings).

Unit testing code that involves multiple threads at its most basic level can be extremely difficult, especially if the background/foreground work is encapsulated well in the class.

This is where the UnitTestModule comes in. It binds the pair of TaskSchedulers (background and foreground) to an implementation of a single-threaded (or synchronous) TaskScheduler. This means that any background work happens synchronously, which makes the View Models much easier to test. You wouldn’t want to repeat this binding for every single View Model test class, so the UnitTestModule (and thus the UnitTestKernel) is the perfect place for it.
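
As a sketch (UnitTestModule and UnitTestKernel are just my names from above, and the named bindings are only one possible way of distinguishing the two schedulers):

public class UnitTestModule : NinjectModule
{
    public override void Load()
    {
        // Both the background and the foreground scheduler run work
        // synchronously on the calling thread during unit tests.
        Bind<TaskScheduler>().To<CurrentThreadTaskScheduler>().Named("Background");
        Bind<TaskScheduler>().To<CurrentThreadTaskScheduler>().Named("Foreground");
    }
}

public class UnitTestKernel : NSubstituteMockingKernel
{
    public UnitTestKernel()
    {
        // Load the common test bindings so individual test classes don't have to.
        this.Load(new UnitTestModule());
    }
}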

TaskSchedulers are great, precisely for the reason that you can provide your own implementations. I’ve used a CurrentThreadTaskScheduler and even a ManualTaskScheduler for the purposes of unit testing, and it really does make everything much easier to test. Of course, the implementations of TaskScheduler that are used in the real app need to be tested too, but that’s what integration testing is for.
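
For reference, a synchronous scheduler is only a handful of lines; this mirrors the CurrentThreadTaskScheduler from Microsoft’s ParallelExtensionsExtras samples:

public class CurrentThreadTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        // Execute immediately on the calling thread instead of queueing.
        TryExecuteTask(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return Enumerable.Empty<Task>();
    }

    public override int MaximumConcurrencyLevel
    {
        get { return 1; }
    }
}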

Conclusion

Tests shouldn’t be something that causes you to groan in frustration every time you have to make a change. I find that using an IoC container in my unit tests removes at least one of those “groan” points, the constructor, and feels much cleaner.