
Ahhhh automated tests. I first encountered the concept of automated tests 6-7 years ago via a colleague experimenting with NUnit. I wasn’t overly impressed at first. After all, your code should just work, you shouldn’t need to prove it. It’s safe to say I was a bad developer.

Luckily logic prevailed, and I soon came to accept the necessity of writing tests to improve the quality of a piece of software. It’s like double-entry bookkeeping: the tests provide checks and balances for your code, giving you more than one indicator as to whether or not it is working as expected.

Notice that I didn’t say that they prove your code is doing what it is supposed to. In the end tests are still written by a development team, and the team can still misunderstand what is actually required. They aren’t some magical silver bullet that solves all of your problems; they are just another tool in the toolbox, albeit a particularly useful one.

Be careful when writing your tests. It’s very easy to write tests that actually end up making your code less able to respond to change. It can be very disheartening to go to change the signature of a constructor and hit hundreds of compiler errors because someone helpfully wrote 349 tests that all use the constructor directly. I’ve written about this specific issue before, but in more general terms you need to be very careful about writing tests that hurt your codebase instead of helping it.

I’m going to assume that you are writing tests. If not, you’re probably doing it wrong. Unit tests are a good place to start for most developers, and I recommend The Art of Unit Testing by Roy Osherove.

I like to classify my tests into three categories: Unit, Integration and Functional.

Unit

Unit tests are isolationist, kind of like a paranoid survivalist. They don’t rely on anyone or anything, only themselves. They should be able to run without instantiating anything beyond the class under test, and should be very fast. They tend to exercise specific pieces of functionality, often at a very low level, although they can also encompass verifying business logic. This is less likely though, as business logic typically involves multiple classes working together to accomplish a higher level goal.

Unit tests are the lowest value tests for verifying that your piece of software works from an end-user’s point of view, purely because of their isolationist stance. It’s entirely plausible to have an entire suite of hundreds of unit tests passing and still have a completely broken application (it’s unlikely, though).

Their true value comes from their speed and their specificity.

Typically I run my unit tests all the time, as part of a CI (Continuous Integration) environment, to tighten the feedback loop; this is only possible if they run quickly. Additionally, if a unit test fails, the failure should be specific enough that it is obvious why the failure occurred (and where it occurred).

I like to write my unit tests in the Visual Studio testing framework, augmented by FluentAssertions (to make assertions clearer), NSubstitute (for mocking purposes) and Ninject (to avoid creating a hard dependency on constructors, as previously described).
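To make that concrete, here’s a minimal sketch of what a unit test looks like on that stack. The IUserRepository interface and UserValidator class are hypothetical, invented purely to show the shape of a test; for brevity the sketch calls the constructor directly, where in practice I’d resolve the class under test via Ninject to avoid that hard dependency.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using FluentAssertions;
using NSubstitute;

// Hypothetical interface and class, purely to illustrate the structure.
public interface IUserRepository
{
    bool UsernameExists(string username);
}

public class UserValidator
{
    private readonly IUserRepository repository;

    public UserValidator(IUserRepository repository)
    {
        this.repository = repository;
    }

    public bool IsValid(string username)
    {
        return !string.IsNullOrWhiteSpace(username) && !repository.UsernameExists(username);
    }
}

[TestClass]
public class UserValidatorUnitTests
{
    [TestMethod]
    public void UserValidator_IsValid_WhenUsernameAlreadyExistsReturnsFalse()
    {
        // NSubstitute stands in for the repository, keeping the test isolated.
        var repository = Substitute.For<IUserRepository>();
        repository.UsernameExists("annabelle").Returns(true);

        var validator = new UserValidator(repository);

        var result = validator.IsValid("annabelle");

        // FluentAssertions makes the expectation read like a sentence.
        result.Should().BeFalse();
    }
}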

Integration

Integration tests involve multiple components working in tandem.

Typically I write integration tests to run at a level just below the User Interface and make them purely programmatic. They should walk through a typical user interaction, focusing on accomplishing some goal, and then checking that the goal was appropriately accomplished (i.e. changes were made or whatnot).

I prefer integration tests not to have external dependencies (like databases), but sometimes that isn’t possible (you don’t want to mock an entire API, for example), so it’s best if they operate in a fashion that isn’t reliant on external state.

This means that if you’re talking to an API, for example, you should be creating, modifying and deleting the appropriate records within the tests themselves. The same can be said for a database: create the bits you want, and clean up after yourself.
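A sketch of that create/clean-up pattern, using the same MSTest framework as above. The UserManagementFacade and its methods are hypothetical stand-ins for whatever programmatic layer your application exposes below the UI.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using FluentAssertions;

[TestClass]
public class UserManagementIntegrationTests
{
    private UserManagementFacade facade;
    private string createdUsername;

    [TestInitialize]
    public void CreateTestData()
    {
        facade = new UserManagementFacade();

        // Create the records this test needs, rather than assuming they
        // already exist in some shared database or API.
        createdUsername = "integration-test-" + Guid.NewGuid();
        facade.RegisterUser(createdUsername, "Test User");
    }

    [TestMethod]
    public void I_UserManagement_RegisteredUserCanBeFoundBySearch()
    {
        var results = facade.SearchUsers(createdUsername);

        results.Should().ContainSingle(u => u.Username == createdUsername);
    }

    [TestCleanup]
    public void RemoveTestData()
    {
        // Clean up after yourself so the next run starts from a known state.
        facade.DeleteUser(createdUsername);
    }
}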

Integration tests are great for indicating whether or not multiple components are working together as expected, and for verifying that, at whatever programmable level you have introduced, the user can accomplish their desired goals.

Integration tests like those I have described above are often incredibly difficult to write for a system that does not already have them, because the necessary programmability layer has to be built into the system design. This layer has to exist because, historically, programmatically driving most UI layers has proven to be problematic at best (and impossible at worst).

The downside is that they are typically much, much slower than unit tests, especially if they are dependent on external resources. You wouldn’t want to run them as part of your CI, but you definitely want to run them regularly (at least nightly, but I like midday and midnight) and before every release candidate.

I like to write my Integration tests in the same testing framework as my unit tests, still using FluentAssertions and Ninject, with as little usage of NSubstitute as possible.

Functional

Functional tests are very much like integration tests, but they have one key difference: they execute on top of whatever layer the user typically interacts with. Whether that is some user interface framework (WinForms, WPF) or a programmatically accessible API (like ASP.NET Web API), the tests focus on automating normal user actions as the user would typically perform them, with the assistance of some automation framework.

I’ll be honest, I’ve had the least luck with implementing these sorts of tests, because the technologies that I’ve personally used the most (CodedUI) have proven to be extremely unreliable. Unsurprisingly, I’ve had a lot more luck with functional tests written on top of a public facing programmable layer (like an API).

The worst outcome of a set of tests is regular, unpredictable failures that have no bearing on whether or not the application is actually working from the point of view of the user. Changing the names of things, or just the text displayed on the screen, can lead to all sorts of failures in automated functional tests. You have to be very careful to use automation friendly meta information (like automation IDs) and to make sure that those pieces of information don’t change without good reason.
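In WPF, for instance, a stable automation ID can be attached to a control so functional tests find it by ID rather than by the text it displays, which is free to change. A small sketch in code-behind (the button and its ID are hypothetical; more commonly the ID is set directly in XAML):

using System.Windows.Automation;
using System.Windows.Controls;

public static class AutomationIdExample
{
    public static void NameControl(Button registerUserButton)
    {
        // Functional tests should locate this control by its automation ID,
        // not by the text it happens to display right now.
        AutomationProperties.SetAutomationId(registerUserButton, "RegisterUserButton");
    }
}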

Finally, managing automated functional tests can be a chore, as they are often quite complicated. You need to manage this code (and it is code, so it needs to be treated like a first class citizen) as well as, if not better than, your actual application code. Probably better, because if you let it atrophy, it will very quickly become useless.

Regardless, functional tests can provide some amount of confidence that your application is actually working and can be used. Once implemented (and maintained) they are far more repeatable than someone performing a set of steps manually.

Don’t think that I think manual testers are not useful in a software development team. Quite the contrary. I think that they should be spending their time and applying their experience to more worthwhile problems, like exploratory testing as opposed to simply being robots following a script. That's why we have computers after all.

I have in the past used CodedUI to write functional tests for desktop applications, but I can’t recommend it. I’ve very recently started using TestComplete, and it seems to be quite good. I’ve heard good things about Selenium, but have never used it myself.

Naming

Your tests should be named clearly. The name should communicate the situation and the expected outcome.

For unit tests I like to use the following convention:

[CLASS_NAME]_[CLASS_COMPONENT]_[DESCRIPTION_OF_TEST]

An example of this would be:

DefaultConfigureUsersViewModel_RegisterUserCommand_WhenNewRegisteredUsernameIsEmptyCommandIsDisabled

I like to use the class name and class component so that you can easily see exactly where the test is. This is important when you are viewing test results in an environment that doesn't support grouping or sorting (like in the text output from your tests on a build server or in an email or something).

The description should be easily readable, and should convey to the reader the situation (When X) and the expected outcome.

For integration tests I tend to use the following convention:

I_[FEATURE]_[DESCRIPTION_OF_TEST]

An example of this would be:

I_UserManagement_EndUserCanEnterTheDetailsOfAUserOfTheSystemAndRegisterThemForUseInTheRestOfTheApplication

As I tend to write my integration tests using the same test framework as the unit tests, the prefix is handy to tell them apart at a glance.

Functional tests are very similar to integration tests, but as they tend to be written in a different framework, the prefix isn’t necessary, as long as they have a good, clear description.

There are other things you can do to classify tests, including using the [TestCategory] attribute (in MSTest at least), but I find good naming to be more useful than anything else.
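For completeness, a tiny sketch of what that looks like in MSTest; the test itself is hypothetical.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UserManagementIntegrationTests
{
    [TestMethod]
    [TestCategory("Integration")]
    public void I_UserManagement_EndUserCanRegisterANewUser()
    {
        // ...
    }
}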

Organisation

My experience is mostly relegated to C# and the .NET framework (with bits and pieces of other things), so when I speak of organisation, I’m talking primarily about solution/project structures in Visual Studio.

I like to break my tests into at least three different projects:

[COMPONENT].Tests
[COMPONENT].Tests.Unit
[COMPONENT].Tests.Integration

The root Tests project is there to contain any common test utilities or other helpers that are used by the other two projects, which should be self-explanatory.

Functional tests tend to be written in a different framework/IDE altogether, but if you’re using the same language/IDE, the naming convention to follow for the functional tests should be obvious.

Within the projects it’s important to name your test classes to match up with your actual classes, at least for unit tests. Each unit test class should be named the same as the actual class being tested, with a suffix of UnitTests. I like to do a similar thing with IntegrationTests, except the name of the class is replaced with the name of the feature (e.g. UserManagementIntegrationTests). I find that a lot of the time integration tests tend to map more naturally to features than to any single class anyway.

Tying it All Together

Testing is one of the most powerful tools in your arsenal, having a major impact on the quality of your code. And yet, I find that people don’t tend to give it a lot of thought.

The artefacts created for testing should be treated with the same amount of care and thoughtfulness as the code that is being tested. This includes things like having a clear understanding of the purpose and classification of each test, its naming, and its structure/position.

I know that most of the above seems a little pedantic, but I think that having a clear convention to follow is important so that developers can focus their creative energies on the important things, like solving problems specific to your domain. If you know where to put something and approximately what it looks like, you reduce the cognitive load in writing tests, which in turn makes them easier to write.

I like it when things get easier.