
I have an interface.

I’m sure you have some interfaces as well. If you don’t use interfaces you’re probably doing it wrong anyway.

My interface is a bit of a monolith, which happens sometimes. It’s not so big that I can justify investing the time in splitting it apart, but it’s large enough that it’s hard to implement and just kind of feels wrong. I don’t need to implement this interface manually very often (yay for mocking frameworks like NSubstitute) so I can live with it for now. Not everything can be perfect, alas.

This particular interface allows a user to access a RESTful web service, and comes with some supporting data transfer objects.

I recently had the desire/need to see what would happen to the user experience of an application using this interface (and its default implementation) if the internet connection was slow, i.e. the calls to the service were delayed or timed out.

Obviously I could have implemented the interface with a wrapper and manually added slowdown/timeout functionality. As I mentioned previously though, there were enough methods in this interface that it sounded like a time-consuming proposition. Not only that, but it would mean I would be tightly coupled to the interface, just to introduce some trivial slowdown code. If the interface changed, I would need to change my slowdown code. That’s bad, as the functionality of my code is distinctly separate from the functionality of the interface, and I should be able to reuse that (admittedly trivial) code anywhere I like.

Plus I’m a lazy programmer. I’ll always go out of my way to write as little code as possible.

Aspect Oriented Programming

What I wanted was to be able to describe some behaviour that applies to all of the calls on my interface, without actually having to write the code myself.

Luckily this concept has already been invented by people much smarter than me. It’s generally referred to as Aspect Oriented Programming (AOP). There’s a lot more to AOP than just adding functionality unobtrusively to calls through an interface, but fundamentally it is about supporting cross-cutting concerns (logging, security, throttling, auditing, etc.) without having to rewrite the same code over and over again.

In this particular case I leveraged the IInterceptor interface supplied by the Castle.DynamicProxy framework. Castle.DynamicProxy is included in the Castle.Core package, and is part of the overarching Castle Project. It is a utility library for generating proxies for abstract classes and interfaces and is used by Ninject and NSubstitute, as well as other Dependency Injection and mocking/substitution frameworks.

Castle.DynamicProxy provides an interface called IInterceptor.

public interface IInterceptor
{
    void Intercept(IInvocation invocation);
}

Of course, that definition doesn’t make a lot of sense without the IInvocation interface (trimmed of all comments for brevity).

public interface IInvocation
{
    object[] Arguments { get; }
    Type[] GenericArguments { get; }
    object InvocationTarget { get; }
    MethodInfo Method { get; }
    MethodInfo MethodInvocationTarget { get; }
    object Proxy { get; }
    object ReturnValue { get; set; }
    Type TargetType { get; }
    object GetArgumentValue(int index);
    MethodInfo GetConcreteMethod();
    MethodInfo GetConcreteMethodInvocationTarget();
    void Proceed();
    void SetArgumentValue(int index, object value);
}

You can see from the above definition that the IInvocation provides information about the method that is being called, along with a mechanism to actually call the method (Proceed).

You can implement this interface to do whatever you want, so I implemented an interceptor that slowed down all method calls by some configurable amount. You can then use your interceptor whenever you create a proxy class.

public class DelayingInterceptor : IInterceptor
{
    private static readonly TimeSpan _Delay = TimeSpan.FromSeconds(5);

    public DelayingInterceptor(ILog log)
    {
        _Log = log;
    }

    private readonly ILog _Log;

    public void Intercept(IInvocation invocation)
    {
        _Log.DebugFormat("Slowing down invocation of [{0}] by [{1}] milliseconds.", invocation.Method.Name, _Delay.TotalMilliseconds);
        Thread.Sleep(_Delay);
        invocation.Proceed();
    }
}

Proxy classes are fantastic. Essentially they are automatically implemented wrappers for an interface, often with a concrete implementation to wrap supplied during construction. When you create a proxy you can choose to supply an interceptor that will be automatically executed whenever a method call is made on the interface.

This example code shows how easy it is to set up a proxy for the fictitious IFoo interface, and delay all calls made to its methods by the amount described above.

IFoo toWrap = new Foo();

var generator = new ProxyGenerator();
var interceptor = new DelayingInterceptor(log4net.LogManager.GetLogger("TEST"));
var proxy = generator.CreateInterfaceProxyWithTarget(toWrap, interceptor);

As long as you are talking in interfaces (or at the very least abstract classes) you can do just about anything!

Stealthy Interception

If you use Ninject, it offers the ability to add interceptors to any binding automatically using the optional Ninject.Extensions.Interception library.

You still have to implement IInterceptor, but you don’t have to manually create a proxy yourself.
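As a rough sketch (written from memory rather than taken from the original post, so treat the package and namespace names with suspicion), a binding with interception attached looks something like the following. Note that the extension defines its own IInterceptor/IInvocation pair rather than reusing Castle’s directly, so the DelayingInterceptor above would need a small port, shown here as a hypothetical NinjectDelayingInterceptor.

// Requires the Ninject.Extensions.Interception.DynamicProxy package (or the LinFu variant).
using Ninject;
using Ninject.Extensions.Interception.Infrastructure.Language;

var kernel = new StandardKernel();

// NinjectDelayingInterceptor is a hypothetical port of the DelayingInterceptor
// to the extension's own IInterceptor interface.
kernel.Bind<IFoo>().To<Foo>().Intercept().With<NinjectDelayingInterceptor>();

// Anything resolved through the kernel now comes back wrapped in a proxy,
// so every call on IFoo passes through the interceptor first.
var foo = kernel.Get<IFoo>();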

In my case, I wasn’t able to leverage Ninject (even though the application was already using it), as I was already using a Factory that had some logic in it. This stopped me from simply using Ninject bindings for the places where I was using the interface. I can see Ninject’s support for interception being very useful though, now that I understand how interceptors work. In fact, since my slowdown interceptor is very generic, I could conceivably experiment with slowdowns at various levels in the application, from disk writes to background processes, just to see what happens. It’s always nice to have that sort of power to see how your application will actually react when things are going wrong.

Other Ways

I’m honestly not entirely sure if Interceptors fit the classic definition of Aspect Oriented Programming. They do allow you to implement cross cutting concerns (like my slowdown), but I generally see AOP referred to in the context of code-weaving.

Code-weaving is where code is automatically added into your classes at compile time. You can use this to automatically add boilerplate code like null checking on constructor arguments and whatnot without having to write the code yourself. Just describe that you want the parameters to be null checked and the code will be added at compile time. I’m not overly fond of this approach personally, as I like having the code in source control represent reality. I can imagine code-weaving leading to situations where it is more difficult to debug the code, because the source doesn’t line up with the compiled artefacts.

I don’t have any experience here, I’m just mentioning it for completeness.

Conclusion

In cases where you need to be able to describe some generic piece of code that occurs for all method calls of an interface, Interceptors are fantastic. They raise the level that you are coding at, in my opinion, moving beyond writing code that directly tells the computer what to do and into describing the behaviour that you want. This leads to less code that needs to be maintained and fewer hard couplings to the interface (as you would get if you implemented the wrapper yourself). Kind of like using an IoC container with your tests (enabling you to freely change your constructor without getting compile errors), you can freely change your interface and not have it impact your interceptors.

I’m already thinking of other ways in which I can leverage interceptors. One that immediately comes to mind is logging calls to the service and timing how long they take, which is invaluable when investigating issues at the user’s end and for monitoring performance.
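As a quick sketch of that idea (my own illustration, not from the original post), a timing interceptor looks almost identical to the delaying one, just with a Stopwatch wrapped around the call:

public class TimingInterceptor : IInterceptor
{
    private readonly ILog _Log;

    public TimingInterceptor(ILog log)
    {
        _Log = log;
    }

    public void Intercept(IInvocation invocation)
    {
        // Time the real call and log the result, even if the call throws.
        var stopwatch = Stopwatch.StartNew();
        try
        {
            invocation.Proceed();
        }
        finally
        {
            stopwatch.Stop();
            _Log.DebugFormat("Invocation of [{0}] took [{1}] milliseconds.", invocation.Method.Name, stopwatch.ElapsedMilliseconds);
        }
    }
}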

Interceptors provide a great, modular and decoupled way to accomplish certain cross-cutting concerns, and I’m going to try and find more ways to leverage them in the future now that I know they exist.

You should too.

Unless you aren’t using interfaces.

Then you’ve got bigger problems.


I should have posted about this last week, but I got so involved in talking about automating my functional tests that I forgot all about it, so my apologies, but this post is a little stale. I hope that someone still finds it useful though, even as an opinion piece.

Automating the functional tests isn’t going so well actually, as I’m stuck on actually executing TestExecute in the virtual environment. It’s dependent on there being a desktop for it to interact with (no doubt for the purposes of automation, like mouse clicks and send keys and stuff) and I’m executing it remotely, from a different machine, on a machine that is not guaranteed to have any available interactive user sessions. It’s very frustrating.

Developer, Developer, Developer was hosted at QUT on Saturday 6 December, and it was great. This post is primarily to give a shoutout to everyone that I saw speak on the day, as well as some of my thoughts about the content. It’s not a detailed breakdown of everything that I learned, just some quick notes.

First up, the conference would not have been possible without the sponsors, which I will mention here because they are awesome. Octopus Deploy, DevExpress and CampaignMonitor were the main sponsors, with additional sponsorship from Readify, SSW, Telerik + more. The whole day only cost attendees $30 each, and considering we had 2 massive lecture theatres and 1 smaller room at QUT for the entire day, food and drinks and then prizes at the end, the sponsors cannot be thanked enough.

Octopussy

The first talk of the day (the keynote) was from Paul Stovell, the Founder of Octopus Deploy.

Paul talked a bit about the origins of Octopus Deploy through to where the company (and its flagship product) are now, and then a little bit about where they are heading.

I found it really interesting listening to Paul speak about his journey evolving Octopus Deploy into something people actually wanted to pay money for. Paul described how Octopus was developed, how the company grew from just him to 5+ people, their first office (where they are now) and a number of difficulties he had along the way as he himself evolved from a developer to a managing director. I’ve been reading Paul’s blog for a while now, so there wasn’t a huge amount of new information, but it was still useful to see how everything fit together and to hear Paul himself talk about it.

I don’t think I will ever develop something that I could turn into a business like that, but it’s nice to know that it is actually possible.

A big thank you to Paul for his presentation, and to Octopus Deploy for their sponsorship of the event.

Microservices

Mehdi Khalili presented the second talk that I attended, and it was about microservices. Everyone seems to be talking about microservices now (well, maybe just the people I talk to), and to be honest, I’d almost certainly fail to describe them within the confines of this blurb, so I’ll just leave a link here to Martin Fowler’s great article on them. It’s a good read, if a little heavy.

Long story short, it’s a great idea but it’s super hard to do right. Like everything.

Mehdi had some really good lessons to share from implementing the pattern in reality, including things like making sure your services are robust in the face of failure (using patterns like Circuit Breaker) and ensuring that you have a realistic means of tracking requests as they pass through multiple services.

Mehdi is pretty awesome and well prepared, so his slides are available here.

I really should have written this blog post sooner, because I can’t remember a lot of concrete points from Mehdi’s talk, apart from the fact that it was informative while not being ridiculously eye-opening (I had run across the concepts and lessons before either through other talks or blog posts). Still, well worth attending and a big thank you to Mehdi for taking the time to put something together and present it to the community.

Microservices 2, Electric Boogaloo

I like Damian Maclennan, he seems like the kind of guy who isn’t afraid to tell you when you’re shit, but also never hesitates to help out if you need it. I respect that attitude.

Damian followed Mehdi’s talk on microservices, with another talk on microservices. I’ve actually seen Damian (and Andrew Harcourt) talk about microservices before, at the Brisbane Azure User Group in October, so I contemplated not going to this talk (and instead going to see William Tulloch tell me why I shouldn’t say fuck in front of the client). In the end I decided to attend this one, and I was glad that I did.

Damian’s talk provided a good contrast to Mehdi’s, with a greater focus on a personal experience that he had implementing microservices. He talked about a fictional project that he had been a part of for a company called “Pizza Brothers” and did a great walkthrough of the state of the system at the point where he came onto the project to rescue it, and how it changed. He talked about how he (and the rest of the team) slowly migrated everything into a Service Bus/Event based microservice architecture, how that dealt with the problems of the existing system, and why.

He was clear to emphasise that the whole microservices pattern isn’t something that you implement in a weekend, and that if you have a monolithic system, it’s going to take a long time to change it for the better. It’s not an easy knot to unpick and it takes a lot of effort and discipline to do right.

I think I appreciate these sorts of talks (almost like case studies) more than any other sort, as they give the context behind the guidelines and tips. I find that helps me to apply the lessons in the real world.

Another big thank you to Damian for taking the time to do this.

Eventing, Not Just for Services

Daniel Little gave the first presentation after lunch. He spoke about decoupling your domain model from the underlying persistence, which is typically a database.

The concepts that Daniel presented were very interesting. He took the event based design sometimes used in microservices, and used that to disconnect the domain model from the underlying database. The usage of events allowed the domain to focus on actual domain logic, and let something else worry about the persistence, without having to deal with duplicate classes or dumbing everything down so that the database could understand it.

I think this sort of pattern has a lot of value, as I often struggle with persistence concerns leaking into an implementation and muddying the waters of the domain. I hadn’t actually considered approaching the decoupling problem with this sort of solution, so the talk was very valuable to me.

Kudos to Daniel for his talk.

Cmon Guys, UX Is A Thing You Can Do

This one was a pair presentation from Juan Ojeda and Jim Pelletier from Kiandra IT. Juan is a typical user experience (UX) guy, but Jim is a developer who started doing more UX after working with Juan. Jim’s point of view was a bit different than the normal UX stuff you see, which was nice.

I think developers tend to gloss over the user experience in favour of interesting technical problems, and the attendance at this talk only reinforced that opinion. There weren’t many people present, which was a shame because I think the guys gave some interesting information about making sure that you always keep the end-user in mind whenever you develop software, and presented some great tools for accomplishing that.

User experience seems to be one of those things that developers are happy to relegate to a “UI guy”, which I find to be very un-agile, because it reduces the shared responsibility of the team. Sure, there are going to be people with expertise in the area, but we shouldn’t shy away from problems in that space, as they are just as interesting to solve as the technical ones. Even if they do involve people instead of bits and bytes.

Juan and Jim talked about some approaches to UX, including using actual users in your design (kind of like personas) and measuring the impact and usage of your applications. They briefly touched on some ways to include UX into Agile methodologies and basically just reinforced how I felt about user experience and where it fits in into the software development process.

Thanks to Juan and Jim for presenting.

Security is Terrifying

The second last talk was by far the most eye-opening. OJ Reeves did a great presentation on how we are all doomed because none of our computers are secure.

It made me never want to connect my computer to a network ever again. I might not even turn it on. It’s just not safe.

Seriously, this was probably the most entertaining and generally awesome talk of the day. It helps that OJ himself exudes an excitement for this sort of stuff, and his glee at compromising a test laptop (and then the things accessible from that laptop) was a joy to behold.

OJ did a fantastic demo where he used an (at the time unpatched) exploit in Internet Explorer (I can’t remember the version sorry) and Cross Site Scripting (XSS) to gain administrative access over the computer. He hid his intrusion by attaching the code he was executing to the memory and execution space of explorer! I didn’t even know that was possible. He then used his access to do all sorts of things, like take photos with the webcam, copy the clipboard, keylog and more dangerously, pivot his access to other machines on the network of the compromised machine that were not originally accessible from outside of the network (no external surface).

I didn’t take anything away from the talk other than terror, and the knowledge that there exist tools called Metasploit and Meterpreter which I should probably investigate one day. Security is one of those areas that I don’t think most developers spend enough time thinking about, and yet it’s one with some fairly brutal downsides if you mess it up.

You’re awesome OJ. Please keep terrifying muppets.

So You Want to be a Consultant

Damian Maclennan’s second talk for the day was about things that he has learnt while working at Readify as a consultant (of various levels of seniority), before he moved to Octopus Deploy to be the CTO there.

I had actually recently applied for (and been rejected from) a position at Readify *shakes fist*, so it was interesting hearing about the kinds of gigs that he had dealt with, and the lessons he learnt.

To go into a little more detail about my Readify application, I made it to the last step in their interview process (which consists of Coding Test, Technical Interview, Culture Interview and then Interview with the Regional Manager) but they decided not to offer me the position. In the end I think they made the right decision, because I’m not sure if I’m cut out for the world of consulting at this point in my career, but I would have appreciated more feedback on the why, so that I could use it to improve further.

Damian had a number of lessons that he took away from his time consulting, which he presented in his typical humorous fashion. Similar to his talk on microservices earlier in the day, I found that the context around Damian’s lessons learnt was the most valuable part of the talk, although don’t get me wrong, the lessons themselves were great. It turns out that most of the problems you have to deal with as a consultant are not really technical problems (although there are plenty of those) and are instead people issues. An organisation might think they have a technical problem, but it’s more likely that they have something wrong with their people and the way they interact.

Again, this is another case where I wish I had taken actual notes instead of just enjoying the talk, because then I would be able to say something more meaningful here other than “You should have been there.”

I’ve already thanked Damian above, but I suppose he should get two for doing two talks. Thanks Damian!

Conclusion

DDD is a fantastic community run event that doesn’t try to push any agenda other than “you can be better”, which is awesome. Sure it has sponsors, but they aren’t in your face all the time, and the focus really is on the talks. I’ve done my best to summarise how I felt about the talks that I attended above, but obviously it’s no substitute for attending them. I’ve linked to the slides or videos where possible, but not a lot is available just yet. SSW was recording a number of the talks I went to, so you might even see my bald head in the audience when they publish them (they haven’t published them yet).

I highly recommend that you attend any and all future DDD’s until the point whereby it collapses under the weight of its own awesomeness into some sort of singularity. At that stage you won’t have a choice any longer, because its educational pull will be so strong.

Might as well accept your fate ahead of time.


Ahhhh automated tests. I first encountered the concept of automated tests 6-7 years ago via a colleague experimenting with NUnit. I wasn’t overly impressed at first. After all, your code should just work, you shouldn’t need to prove it. It’s safe to say I was a bad developer.

Luckily logic prevailed, and I soon came to accept the necessity of writing tests to improve the quality of a piece of software. It’s like double-entry bookkeeping; the tests provide checks and balances for your code, giving you more than one indicator as to whether or not it is working as expected.

Notice that I didn’t say that they prove your code is doing what it is supposed to. In the end tests are still written by a development team, and the team can still misunderstand what is actually required. They aren’t some magical silver bullet that solves all of your problems, they are just another tool in the tool box, albeit a particularly useful one.

Be careful when writing your tests. It’s very easy to write tests that actually end up making your code less able to respond to change. It can be very disheartening to go to change the signature of a constructor and hit hundreds of compiler errors because someone helpfully wrote 349 tests that all use the constructor directly. I’ve written about this specific issue before, but in more general terms you need to be very careful about writing tests that hurt your codebase instead of helping it.

I’m going to assume that you are writing tests. If not, you’re probably doing it wrong. Unit tests are a good place to start for most developers, and I recommend The Art of Unit Testing by Roy Osherove.

I like to classify my tests into 3 categories. Unit, Integration and Functional.

Unit

Unit tests are isolationist, kind of like a paranoid survivalist. They don’t rely on anyone or anything, only themselves. They should be able to be run without instantiating any class but themselves, and should be very fast. They tend to exercise specific pieces of functionality, often at a very low level, although they can also encompass verifying business logic. This is less likely though, as business logic typically involves multiple classes working together to accomplish a higher level goal.

Unit tests are the lowest value tests for verifying that your piece of software works from an end-user’s point of view, purely because of their isolationist stance. It’s entirely plausible to have an entire suite of hundreds of unit tests passing and still have a completely broken application (it’s unlikely though).

Their true value comes from their speed and their specificity.

Typically I run my unit tests all the time, as part of a CI (Continuous Integration) environment, which is only possible if they run quickly, to tighten the feedback loop. Additionally, if a unit test fails, the failure should be specific enough that it is obvious why the failure occurred (and where it occurred).

I like to write my unit tests in the Visual Studio testing framework, augmented by FluentAssertions (to make assertions clearer), NSubstitute (for mocking purposes) and Ninject (to avoid creating a hard dependency on constructors, as previously described).
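To make that concrete, here’s a sketch of what a test in that style might look like. It uses the stack described above (the Visual Studio testing framework, FluentAssertions, NSubstitute and Ninject), but OrderPricingService and IDiscountProvider are hypothetical types invented purely for this example.

using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Ninject;
using NSubstitute;

public interface IDiscountProvider
{
    decimal GetDiscountFor(string customerId);
}

public class OrderPricingService
{
    private readonly IDiscountProvider _discounts;

    public OrderPricingService(IDiscountProvider discounts)
    {
        _discounts = discounts;
    }

    public decimal CalculateTotal(string customerId, decimal subtotal)
    {
        return subtotal * (1m - _discounts.GetDiscountFor(customerId));
    }
}

[TestClass]
public class OrderPricingServiceUnitTests
{
    [TestMethod]
    public void OrderPricingService_CalculateTotal_WhenDiscountIsTenPercentTotalIsReduced()
    {
        // Substitute the collaborator so that only the class under test is exercised.
        var discounts = Substitute.For<IDiscountProvider>();
        discounts.GetDiscountFor(Arg.Any<string>()).Returns(0.10m);

        // Resolving via a kernel (rather than calling the constructor directly) means a
        // constructor change doesn't break every test that builds this class.
        var kernel = new StandardKernel();
        kernel.Bind<IDiscountProvider>().ToConstant(discounts);
        var target = kernel.Get<OrderPricingService>();

        var total = target.CalculateTotal("SOME-CUSTOMER", 100m);

        total.Should().Be(90m);
    }
}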

Integration

Integration tests involve multiple components working in tandem.

Typically I write integration tests to run at a level just below the User Interface and make them purely programmatic. They should walk through a typical user interaction, focusing on accomplishing some goal, and then checking that the goal was appropriately accomplished (i.e. changes were made or whatnot).

I prefer integration tests to not have external dependencies (like databases), but sometimes that isn’t possible (you don’t want to mock an entire API, for example), so it’s best if they operate in a fashion that isn’t reliant on external state.

This means that if you’re talking to an API for example, you should be creating, modifying and deleting appropriate records for your tests within the tests themselves. The same can be said for a database, create the bits you want, clean up after yourself.
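A hedged sketch of that shape of test, again using the Visual Studio testing framework; UserServiceClient and its methods are placeholders for whatever client you would actually use against your API or database.

using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UserManagementIntegrationTests
{
    // Hypothetical client for the external service the tests depend on.
    private UserServiceClient _client;
    private string _createdUserId;

    [TestInitialize]
    public void CreateTestData()
    {
        _client = new UserServiceClient("https://test-environment.example.com");
        _createdUserId = _client.CreateUser("integration-test-user@example.com");
    }

    [TestMethod]
    public void I_UserManagement_EndUserCanDisableAUserAndTheChangeIsPersisted()
    {
        _client.DisableUser(_createdUserId);

        _client.GetUser(_createdUserId).IsDisabled.Should().BeTrue();
    }

    [TestCleanup]
    public void RemoveTestData()
    {
        // Clean up after ourselves so the tests don't rely on (or pollute) external state.
        _client.DeleteUser(_createdUserId);
    }
}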

Integration tests are great for indicating whether or not multiple components are working together as expected, and for verifying that at whatever programmable level you have introduced the user can accomplish their desired goals.

Often integration tests like I have described above are incredibly difficult to write on a system that does not already have them. This is because you need to build the necessary programmability layer into the system design for the tests. This layer has to exist because historically, programmatically executing most UI layers has proven to be problematic at best (and impossible at worst).

The downside is that they are typically much, much slower than unit tests, especially if they are dependent on external resources. You wouldn’t want to run them as part of your CI, but you definitely want to run them regularly (at least nightly, but I like midday and midnight) and before every release candidate.

I like to write my Integration tests in the same testing framework as my unit tests, still using FluentAssertions and Ninject, with as little usage of NSubstitute as possible.

Functional

Functional tests are very much like integration tests, but they have one key difference: they execute on top of whatever layer the user typically interacts with. Whether that is some user interface framework (WinForms, WPF) or a programmatically accessible API (like ASP.NET Web API), the tests focus on automating normal user actions as the user would typically perform them, with the assistance of some automation framework.

I’ll be honest, I’ve had the least luck with implementing these sorts of tests, because the technologies that I’ve personally used the most (CodedUI) have proven to be extremely unreliable. Functional tests written on top of a public facing programmable layer (like an API) I’ve had a lot more luck with, unsurprisingly.

The worst outcome for a set of tests is regular, unpredictable failures that have no bearing on whether or not the application is actually working from the point of view of the user. Changing the names of things, or just the text displayed on the screen, can lead to all sorts of failures in automated functional tests. You have to be very careful to use automation-friendly meta information (like automation IDs) and to make sure that those pieces of information don’t change without good reason.

Finally, managing automated functional tests can be a chore, as they are often quite complicated. You need to manage this code (and it is code, so it needs to be treated like a first class citizen) as well, if not better than your actual application code. Probably better, because if you let it atrophy, it will very quickly become useless.

Regardless, functional tests can provide some amount of confidence that your application is actually working and can be used. Once implemented (and maintained) they are far more repeatable than someone performing a set of steps manually.

Don’t think that I think manual testers are not useful in a software development team. Quite the contrary. I think that they should be spending their time and applying their experience to more worthwhile problems, like exploratory testing as opposed to simply being robots following a script. That's why we have computers after all.

I have in the past used CodedUI to write functional tests for desktop applications, but I can’t recommend it. I’ve very recently started using TestComplete, and it seems to be quite good. I’ve heard good things about Selenium, but have never used it myself.

Naming

Your tests should be named clearly. The name should communicate the situation and the expected outcome.

For unit tests I like to use the following convention:

[CLASS_NAME]_[CLASS_COMPONENT]_[DESCRIPTION_OF_TEST]

An example of this would be:

DefaultConfigureUsersViewModel_RegisterUserCommand_WhenNewRegisteredUsernameIsEmptyCommandIsDisabled

I like to use the class name and class component so that you can easily see exactly where the test is. This is important when you are viewing test results in an environment that doesn't support grouping or sorting (like in the text output from your tests on a build server or in an email or something).

The description should be easily readable, and should convey to the reader an indication of the situation (When X) and the expected outcome.

For integration tests I tend to use the following convention:

I_[FEATURE]_[DESCRIPTION_OF_TEST]

An example of this would be:

I_UserManagement_EndUserCanEnterTheDetailsOfAUserOfTheSystemAndRegisterThemForUseInTheRestOfTheApplication

As I tend to write my integration tests using the same test framework as the unit tests, the prefix is handy to tell them apart at a glance.

Functional tests are very similar to integration tests, but as they tend to be written in a different framework, the prefix isn’t necessary, as long as they have a good, clear description.

There are other things you can do to classify tests, including using the [TestCategory] attribute (in MSTest at least), but I find good naming to be more useful than anything else.
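For completeness, the attribute-based approach looks something like the following. The filter syntax in the comment is from memory, so double check it against your version of vstest.console before relying on it.

[TestMethod]
[TestCategory("Integration")]
public void I_UserManagement_EndUserCanDisableAUserAndTheChangeIsPersisted()
{
    // ...
}

// Categories can then be used to filter a test run, e.g.
// vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Integration"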

Organisation

My experience is mostly relegated to C# and the .NET framework (with bits and pieces of other things), so when I speak of organisation, I’m talking primarily about solution/project structures in Visual Studio.

I like to break my tests into at least 3 different projects.

[COMPONENT].Tests
[COMPONENT].Tests.Unit
[COMPONENT].Tests.Integration

The root tests project contains any common test utilities or other helpers that are used by the other two projects, which should be self-explanatory.

Functional tests tend to be written in a different framework/IDE altogether, but if you’re using the same language/IDE, the naming convention to follow for the functional tests should be obvious.

Within the projects it’s important to name your test classes to match up with your actual classes, at least for unit tests. Each unit test class should be named the same as the actual class being tested, with a suffix of UnitTests. I like to do a similar thing with IntegrationTests, except the name of the class is replaced with the name of the feature (i.e. UserManagementIntegrationTests). I find that a lot of the time integration tests tend to cut across many classes anyway, so naming them after features makes more sense.

Tying it All Together

Testing is one of the most powerful tools in your arsenal, having a major impact on the quality of your code. And yet, I find that people don’t tend to give it a lot of thought.

The artefacts created for testing should be treated with the same amount of care and thoughtfulness as the code that is being tested. This includes things like having a clear understanding of the purpose and classification of a test, naming and structure/position.

I know that most of the above seems a little pedantic, but I think that having a clear convention to follow is important so that developers can focus their creative energies on the important things, like solving problems specific to your domain. If you know where to put something and approximately what it looks like, you reduce the cognitive load in writing tests, which in turn makes them easier to write.

I like it when things get easier.


Update: I ran into an issue with the script used in this post to do the signing when using an SHA-256 certificate (i.e. a newer one). I wrote another post describing the issue and solution here.

God I hate certificates.

Everything involving them always seems to be painful. Then you finally get the certificate thing done after hours of blood, sweat and pain, put it behind you, and some period of time later, the certificate expires and it all happens again. Of course, you’ve forgotten how you dealt with it the first time.

I’ve blogged before about the build/publish script I made for a ClickOnce WPF application, but I neglected to mention that there was a certificate involved.

Signing is important when distributing an application through ClickOnce, as without a signed installer, whenever anyone tries to install your application they will get warnings. Warnings like this one.

setup.exe is not commonly downloaded and could harm your computer

For a commercial application, that’s a terrible experience. Nobody will want to install a piece of software when their screen is telling them that “the author of the software is unknown”. And it’s red! Red bad. Earlier versions of Internet Explorer weren’t quite as hostile, but starting in IE9 (I think) the warning dialog was made significantly stronger. It’s hard to even find the button to override the warning and just install the damn thing (Options –> More Options –> Run Anyway, which is really out of the way).

As far as I can tell, all ClickOnce applications have a setup.exe file. I have no idea if you can customise this, but it’s essentially just a bootstrapper for the .application file which does some additional checks (like .NET Framework version).

Anyway, the appropriate way to deal with the above issue is by signing the ClickOnce manifests.

You need to use an Authenticode Code Signing Certificate, from a trusted Certificate Authority. These can range in price from $100 US to $500+ US. Honestly, I don’t understand the difference. For this project, we picked up one from Thawte for reasons I can no longer remember.

There’s slightly more to the whole signing process than just having the appropriate Certificate and Signing the installer. Even with a fully signed installer, Internet Explorer (via SmartScreen) will still give a warning to your users when they try to install, saying that “this application is not commonly downloaded”. The only way around this is to build up reputation with SmartScreen, and the only way to do that is slowly, over time, as more and more people download your installer. The kicker here is that without a certificate the reputation is tied to the specific installer, so if you ever make a new installer (like for a version update) all that built up reputation will go away. If you signed it however, the reputation accrues on the Certificate instead.

It’s all very convoluted.

Enough time has passed between now and when I bought and setup the certificate for me to have completely forgotten how I went about it. I remember it being an extremely painful process. I vaguely recall having to generate a CSR (Certificate Signing Request), but I did it from Windows 7 first accidentally, and you can’t easily get the certificate out if you do that, so I had to redo the whole process on Windows Server 2012 R2. Thawte took ages to process the order as well, getting stuck on parts of the certification process a number of times.

Once I exported the certificate (securing the private key with a password) it was easy to incorporate it into the actual publish process though. Straightforward configuration option inside the Project properties, under Signing. The warning went from red (bad) to orange (okayish). This actually gives the end-user a Run button, instead of strongly recommending to just not run this thing. We also started gaining reputation against our Certificate, so that one day it would eventually be green (yay!).

Do you want to run or save setup.exe from

Last week, someone tried to install the application on Windows 8, and it all went to hell again.

I incorrectly assumed that once installed, the application would be trusted, which was true in Windows 7. This is definitely not the case in Windows 8.

Because the actual executable was not signed, the user got to see the following wonderful screen immediately after successfully installing the application (when it tries to automatically start).

windows protected your PC

It’s the same sort of thing as what happens when you run the installer, except it takes over the whole screen to try and get the message across. The Run Anyway command is not quite as hidden (click on More Info) but still not immediately apparent.

The root cause of the problem was obvious (I just hadn’t signed the executable), but fixing it took me at least a day of effort, which is a day of my life I will never get back. That I had to spend in Certificate land. Again.

First Stab

At first I thought I would just be able to get away with signing the assembly. I mean, that option is directly below the configuration option for signing the ClickOnce manifests, so they must be related, right?

I still don’t know, because I spent the next 4 hours attempting to use my current Authenticode Code Signing Certificate as the strong name key file for signing the assembly.

I got an extremely useful error message.

Error during Import of the Keyset: Object already exists.

After a bit of digging it turns out that if you did not use KeySpec=2 (AT_SIGNATURE) during enrollment (i.e. when generating the CSR) you can’t use the resulting certificate for strong naming inside Visual Studio. I tried a number of things, including re-exporting, deleting and then importing the Certificate trying to force AT_SIGNATURE to be on, but I did not have any luck at all. Thawte support was helpful, but in the end unable to do anything about it.

Second Stab

Okay, what about signing the actual executable? Surely I can use my Authenticode Code Signing Certificate to sign the damn executable.

You can sign an executable (not just executables, other stuff too) using the SignTool utility (signtool.exe), which is included in one of the Windows SDKs. I stole mine from “C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\bin”. The nice thing is that it’s an (entirely?) standalone application, so you can include it in your repository in the tools directory so that builds/publishes work without anyone having to install the SDK.

Of course, because I’m publishing the application through ClickOnce, it’s not just as simple as “sign the executable”. ClickOnce uses the hashes of the files included in the install when generating its .manifest file, so if you sign the executable after ClickOnce has published to a local directory (before pushing it to the remote location, like I was doing), it changes the hash of the file and the .manifest is no longer valid.

With my newfound Powershell skills (and some help from this awesome StackOverflow post), I wrote the following script.

param
(
    $certificatesDirectory,
    $workingDirectory,
    $certPassword
)

if ([string]::IsNullOrEmpty($certificatesDirectory))
{
    write-error "The supplied certificates directory is empty. Terminating."
    exit 1
}

if ([string]::IsNullOrEmpty($workingDirectory))
{
    write-error "The supplied working directory is empty. Terminating."
    exit 1
}

if ([string]::IsNullOrEmpty($certPassword))
{
    write-error "The supplied certificate password is empty. Terminating."
    exit 1
}

write-output "The root directory of all files to be deployed is [$workingDirectory]."

$appFilesDirectoryPath = Convert-Path "$workingDirectory\Application Files\[PUBLISH DIRECTORY ROOT NAME]_*\"

write-output "The application manifest and all other application files are located in [$appFilesDirectoryPath]."

if ([string]::IsNullOrEmpty($appFilesDirectoryPath))
{
    write-error "Application Files directory is empty. Terminating."
    exit 1
}

#Need to resign the application manifest, but before we do we need to rename all the files back to their original names (remove .deploy)
Get-ChildItem "$appFilesDirectoryPath\*.deploy" -Recurse | Rename-Item -NewName { $_.Name -replace '\.deploy','' }

$certFilePath = "$certificatesDirectory\[CERTIFICATE FILE NAME]"

write-output "All code signing will be accomplished using the certificate at [$certFilePath]."

$appManifestPath = "$appFilesDirectoryPath\[MANIFEST FILE NAME]"
$appPath = "$workingDirectory\[APPLICATION FILE NAME]"
$timestampServerUrl = "http://timestamp.globalsign.com/scripts/timstamp.dll"

& tools\signtool.exe sign /f "$certFilePath" /p "$certPassword" -t $timestampServerUrl "$appFilesDirectoryPath\[EXECUTABLE FILE NAME]"
if($LASTEXITCODE -ne 0)
{
    write-error "Signing Failure"
    exit 1
}

# mage -update sets the publisher to the application name (overwriting any previous setting)
# We could hardcode it here, but its more robust if we get it from the manifest before we
# mess with it.
[xml] $xml = Get-Content $appPath
$ns = New-Object System.Xml.XmlNamespaceManager($xml.NameTable)
$ns.AddNamespace("asmv1", "urn:schemas-microsoft-com:asm.v1")
$ns.AddNamespace("asmv2", "urn:schemas-microsoft-com:asm.v2")
$publisher = $xml.SelectSingleNode('//asmv1:assembly/asmv1:description/@asmv2:publisher', $ns).Value
write-host "Publisher extracted from current .application file is [$publisher]."

# It would be nice to check the results from mage.exe for errors, but it doesn't seem to return error codes :(
& tools\mage.exe -update $appManifestPath -certFile "$certFilePath" -password "$certPassword"
& tools\mage.exe -update $appPath -certFile "$certFilePath" -password "$certPassword" -appManifest "$appManifestPath" -pub $publisher -ti $timestampServerUrl

#Rename files back to the .deploy extension, skipping the files that shouldn't be renamed
Get-ChildItem -Path "$appFilesDirectoryPath"  -Recurse | Where-Object {!$_.PSIsContainer -and $_.Name -notlike "*.manifest"} | Rename-Item -NewName {$_.Name + ".deploy"}

It’s not the most fantastic thing I’ve ever written, but it gets the job done. Note that the password for the certificate is supplied to the script as a parameter (don’t include passwords in scripts, that’s just stupid). Also note that I’ve replaced some paths/names with tokens in all caps (like [PUBLISH DIRECTORY ROOT NAME]) to protect the innocent.

The meat of the script does the following:

  • Locates the publish directory (which will have a name like [PROJECT NAME]_[VERSION]).
  • Removes all of the .deploy suffixes from the files in the publish directory. ClickOnce appends .deploy to all files that are going to be deployed. I do not actually know why.
  • Signs the executable.
  • Extracts the current publisher from the manifest.
  • Updates the .manifest file.
  • Updates the .application file.
  • Restores the previously removed .deploy suffix.

You may be curious as to why the publisher is extracted from the .manifest file and then re-supplied. This is because if you update a .manifest file and you don’t specify a publisher, it overwrites whatever publisher was there before with the application name. Obviously, this is bad.

Anyway, the signing script is called after a build/publish but before the copy to the remote location in the publish script for the application.

Conclusion

After signing the executable and ClickOnce manifest, Windows 8 no longer complains about the application, and the installation process is much more pleasant. Still not green, but getting closer.

I really do hate every time I have to interact with a certificate though. It’s always complicated, complex and confusing, and leaves me frustrated and annoyed at the whole thing. Every time I learn just enough to get through the problem, but I never feel like I understand the intricacies enough to really be able to do this sort of thing with confidence.

It’s one of those times in software development where I feel like the whole process is too complicated, even for a developer. It doesn’t help that not only is the technical process of using a certificate complicated, but even buying one is terrible, with arbitrary price differences (why are they different?) and terrible processes that you have to go through to even get a certificate.

At least this time I have a blog, and I’ve written this down so I can find it when it all happens again and I’ve purged all the bad memories from my head.


I know just enough of the Windows scripting language (i.e. batch files) to get by. I’ve written a few scripts using it (at least one of which I’ve already blogged about), but I assume there is a world of deeper understanding and expertise there that I just haven’t fathomed yet. I’m sure it’s super powerful, but it just feels so…archaic.

Typically what happens is that I will find myself with the need to automate something, and two options will come to mind:

  1. I could write a batch file. This is always a struggle, because I write C# code all day, and going back to the limited functionality of the windows scripting language is painful. I know that if I want to do something past a certain point of complexity, I’ll need to combine this approach with something else.
  2. I could write a C# application of some sort (probably console) that does the automation. C# is where most of my skills lie, but it seems like I’m overcomplicating a situation when I build an entire C# console application. Additionally, if I’m automating something, I want it to be as simple and readable as possible for the next person (which may be me in 3 months) and encapsulating a bunch of automation logic into a C# application is not immediately discoverable.

I usually go with option 1 (batch file), which to me, is the lesser of two evils.

Enter Powershell

I’ve always secretly known that there are more than 2 options, probably many more.

At the very least there is a 3rd option:

  1. Use Powershell. It’s just like a batch file, except completely different. You can leverage the .NET framework, and you get a lot more useful built-in commands.

For me, the downside of using Powershell has always been the learning curve.

Every time I go to solve a problem that Powershell might be useful for, I can never justify spending the time to learn Powershell, even just the minimum needed to get the job done.

I finally got past that particular mental block recently when I wanted to write an automated build script that did the following:

  1. Update a version.
  2. Commit changes to git.
  3. Build a C# library project.
  4. Package the library into a NuGet package.
  5. Track the package in git for later retrieval.

There’s a lot of complexity hidden in those 5 statements, enough that I knew I wouldn’t be able to accomplish it using just the vanilla Windows scripting language. I resolved that this was definitely not something that should be hidden inside the source code of an application, so it was time to finally go nuclear and learn how to do the Powershell.

Getting Started

The last time I tried to use Powershell was a…while ago. Long enough such that it wasn’t guaranteed that a particular computer would have Powershell installed on it. That’s pretty much not true anymore, so you can just run the “powershell” command from the command line to enter the Powershell repl. Typing “exit” leaves Powershell and returns you back to your command prompt.

Using Powershell on the command line is all well and good for exploration, but how can I use it for scripting?

powershell -Executionpolicy remotesigned -File [FileName]

The -Executionpolicy flag makes it so that you can actually run the script file. By default Powershell has the Restricted policy set, meaning scripts will not run.

Anyway, that seems straightforward enough, so without further ado, I’ll show you the finished Powershell script that accomplishes the above, and then go through it in more detail.

The Script

param
(
    [switch]$release
)

$gitTest = git
if($gitTest -match "'git' is not recognized as an internal or external command")
{
    write-error "Cannot find Git in your path. You must have Git in your path for this package script to work."
    exit
}

# Check for dirty git index (i.e. uncommitted, unignored changes).
$gitStatus = git status --porcelain
if($gitStatus.Length -ne 0)
{
    write-error "There are uncommitted changes in the working directory. Deal with them before you package, or the tag that's made in git as a part of a package will be incorrect."
    exit
}

$currentUtcDateTime = (get-date).ToUniversalTime()

$assemblyInfoFilePath = "[PROJECT PATH]\Properties\AssemblyInfo.cs"
$assemblyVersionRegex = "(\[assembly: AssemblyVersion\()(`")(.*)(`"\))"
$assemblyInformationalVersionRegex = "(\[assembly: AssemblyInformationalVersion\()(`")(.*)(`"\))"

$existingVersion = (select-string -Path $assemblyInfoFilePath -Pattern $assemblyVersionRegex).Matches[0].Groups[3]
$existingVersion = new-object System.Version($existingVersion)
"Current version is [" + $existingVersion + "]."

$major = $existingVersion.Major
$minor = $existingVersion.Minor
$build = $currentUtcDateTime.ToString("yy") + $currentUtcDateTime.DayOfYear
$revision = [int](([int]$currentUtcDateTime.Subtract($currentUtcDateTime.Date).TotalSeconds) / 2)
$newVersion = [System.String]::Format("{0}.{1}.{2}.{3}", $major, $minor, $build, $revision)
"New version is [" + $newVersion + "]."

"Replacing AssemblyVersion in [" + $assemblyInfoFilePath + "] with new version."
$replacement = '$1"' + $newVersion + "`$4"
(get-content $assemblyInfoFilePath) | foreach-object {$_ -replace $assemblyVersionRegex, $replacement} | set-content $assemblyInfoFilePath

if ($release.IsPresent)
{
    $newInformationalVersion = $newVersion
}
else
{
    write-host "Building prerelease version."
    $newInformationalVersion = [System.String]::Format("{0}.{1}.{2}.{3}-pre", $major, $minor, $build, $revision)
}

"Replacing AssemblyInformationalVersion in [" + $assemblyInfoFilePath + "] with new version."
$informationalReplacement = '$1"' + $newInformationalVersion + "`$4"
(get-content $assemblyInfoFilePath) | foreach-object {$_ -replace $assemblyInformationalVersionRegex, $informationalReplacement} | set-content $assemblyInfoFilePath

"Committing changes to [" + $assemblyInfoFilePath + "]."
git add $assemblyInfoFilePath
git commit -m "SCRIPT: Updated version for release package."

$msbuild = 'C:\Program Files (x86)\MSBuild\12.0\bin\msbuild.exe'
$solutionFile = "[SOLUTION FILENAME]"

.\tools\nuget.exe restore $solutionFile

& $msbuild $solutionFile /t:rebuild /p:Configuration=Release
if($LASTEXITCODE -ne 0)
{
    write-host "Build FAILURE" -ForegroundColor Red
    exit
}

.\tools\nuget.exe pack [PATH TO PROJECT FILE] -Prop Configuration=Release -Symbols

write-host "Creating git tag for package."
git tag -a $newInformationalVersion -m "SCRIPT: NuGet Package Created."

[Wall of text] crits [reader] for [astronomical amount of damage].

To prevent people from having to remember to run Powershell with the correct arguments, I also created a small batch file that you can just run by itself to execute the script.

@ECHO OFF

powershell -Executionpolicy remotesigned -File _Package.ps1 %*

As you can see, the batch script is straightforward. All it does is call the Powershell script, passing in any arguments that were passed to the batch file (that’s the %* at the end of the line).

Usage is:

package // To build a prerelease, mid-development package.

package -release // To build a release package, intended to be uploaded to NuGet.org.

Parameters

The first statement at the top defines parameters to the script. In this case, there is only one parameter, and it defines whether or not the script should be run in release mode.

I wasn’t comfortable with automatically making every single package built using the script a release build, because it meant that if I automated the upload to NuGet.org at some later date, I wouldn’t be able to create a bunch of different builds during development without potentially impacting on people actually using the library (they would see new versions available and might update, which would leave me having to support every single package I made, even the ones I was doing mid-development). That’s less than ideal.

The release flag determines whether or not the AssemblyInformationalVersion has -pre appended to the end of the version string. NuGet uses the AssemblyInformationalVersion in order to define whether or not the package is a prerelease build, which isolates it from the normal stream of packages.
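For reference, these are the two attributes in AssemblyInfo.cs that the script rewrites (the values shown are purely illustrative):

using System.Reflection;

// Rewritten by the package script on every run.
[assembly: AssemblyVersion("1.2.14309.2306")]

// The "-pre" suffix is what NuGet treats as marking a prerelease package;
// it is only appended when the -release switch is not supplied.
[assembly: AssemblyInformationalVersion("1.2.14309.2306-pre")]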

Checks

Because the script is dependent on a couple of external tools that could not be easily included in the repository (git in particular, but also MSBuild) I wanted to make sure that it failed fast if those tools were not present.

I’ve only included a check for git because I assume that the person running the script has Visual Studio 2013 installed, whereas git needs to be in the current path in order for the script to do what it needs to do.

The other check that the script does is check to see whether or not there are any uncommitted changes.

I do this because one of the main purposes of this build script is to build a library and then mark the source control system so that the source code for that specific version can be retrieved easily. Without this check, someone could use the script with uncommitted local changes and the resulting tag would not actually represent the contents of the package. Super dangerous!

Versioning

In this particular case, versioning is (yet again) a huge chunk of the script, as it is intended to build a library for distribution.

The built in automatic versioning for .NET is actually pretty good. The problem is, I have never found a way to use that version easily from a build script and the version is never directly stated in the AssemblyInfo file, so you can’t see the version at a glance just by reading the code. I need more control than that.

The algorithm that the .NET framework uses is (partially) explained in the documentation for AssemblyVersion.

To summarise:

  1. The version is of the form [MAJOR].[MINOR].[BUILD].[REVISION].
  2. You can substitute a * for either (or both of) BUILD and REVISION.
  3. BUILD is automatically set to the number of days since 1 January 2000.
  4. REVISION is automatically set to the number of seconds since midnight / 2.

The algorithm I implemented in the script is a slight modification of that, where BUILD is instead set to YYDDD.
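Expressed in C# (purely for illustration; the script does the equivalent in Powershell), the calculation looks roughly like this:

var existing = new Version("1.2.0.0");          // whatever is currently in AssemblyInfo.cs
var now = DateTime.UtcNow;

var build = now.ToString("yy") + now.DayOfYear;                // YYDDD, e.g. "14309" for day 309 of 2014
var revision = (int)(now.Subtract(now.Date).TotalSeconds / 2); // seconds since midnight, halved

var newVersion = string.Format("{0}.{1}.{2}.{3}", existing.Major, existing.Minor, build, revision);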

Anyway, Powershell makes the whole process of creating this new version much much easier than it would be a normal batch file, primarily because of the ability to use the types and functions in the .NET framework. Last time I tried to give myself more control over versioning I had to write a custom MSBuild task.

The script grabs the version currently in the AssemblyVersion attribute of the specified AssemblyInfo file using the select-string cmdlet. It extracts the MAJOR and MINOR numbers from the existing version (using the .NET Version class) and then creates a string containing the new version.

Finally, it uses a brute force replacement approach to jam the new version back into the AssemblyVersion attribute, using the same regular expression. I’ll be brutally honest, I don’t understand the intricacies of the way in which it does the replacement, just that it reads all of the lines from the file, modifies any that match the regular expression, then writes them all back, effectively overwriting the entire file. I wouldn’t recommend this approach for any serious replacement, but AssemblyInfo is a very small file, so it doesn’t matter all that much here.

Some gotchas here. Initially I broke the regular expression into 3 groups: left of the version, the version and right of the version. However, when it came time to do the replace, I could not create a replacement string using the first capture group + the new version, because the resulting string came out like this: “$11.2.14309.2306”. When Powershell/.NET tried to substitute the capture groups in, it tried to substitute a $11 group, which didn’t exist. Simply adding whitespace would have broken the version in the file, so I ended up splitting the regular expression further, so that the single double quote to the left of the version is its own group. When it comes time to do the replacement, I just manually insert that quote, and that worked. A bit nastier than I would like, but ah well.
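A minimal C# reproduction of the problem (and of .NET’s ${number} escape, which is another way around it that the script doesn’t use):

using System;
using System.Text.RegularExpressions;

var line = "[assembly: AssemblyVersion(\"1.0.0.0\")]";
var pattern = "(\\[assembly: AssemblyVersion\\(\")(.*)(\"\\))";
var newVersion = "1.2.14309.2306";

// "$1" immediately followed by a version starting with "1" reads as "$11"; there is
// no group 11, so the reference is emitted literally and the result is mangled.
Console.WriteLine(Regex.Replace(line, pattern, "$1" + newVersion + "$3"));

// "${1}" unambiguously refers to group 1, so this produces the expected line.
Console.WriteLine(Regex.Replace(line, pattern, "${1}" + newVersion + "${3}"));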

The version update is then duplicated for the AssemblyInformationalVersion, with the previously mentioned release/prerelease changes.

Source Control

A simple git add and commit to ensure that the altered AssemblyInfo file is in source control, ready to be tagged after the build is complete. I prepended the commit message with “SCRIPT:” so that it’s easy to tell which commits were done automatically when looking at the git log output.

Build

Nothing fancy, just a normal MSBuild execution, preceded by a NuGet package restore.

I struggled with calling MSBuild correctly from the script for quite a while. For some reason Powershell just would not let me call it with the appropriate parameters. Eventually I stumbled onto this solution. & is simply the call operator.

The script checks that there weren’t any errors during the build, because there would be no point in going any further if there was. The $LASTEXITCODE variable is a handy little variable that tracks the last exit code from a call.

Packaging

A simple NuGet pack command. I use -Symbols because I prefer NuGet packages that include source code, so that they are easier to debug. This is especially useful for a library.

Source Control (again)

If we got this far, we need to create a record that the package was created. A simple git tag showing the same version as the AssemblyInformationalVersion is sufficient.

Summary

As you can clearly see, the script is not perfect. Honestly, I’m not even sure if it’s good. At the very least it gets the job done. I’m sure as I continue to use it for its intended purpose I will come up with ways to improve it and make it clearer and easier to understand.

Regardless of that, Powershell is amazing! The last time I tried to solve this problem I wrote a custom MSBuild task to get the versioning done. That was a lot more effort than the versioning in this script, and much harder to maintain moving forward. The task was better structured though, so that’s definitely an area where this script could use some improvement. Maybe I can extract the versioning code out into a function? Another file maybe? I should almost certainly run the tests before packaging as well, no point in making a package where the library has errors that could have been picked up. Who knows, I’m sure I’ll come up with something.

You may also ask why I went to all this trouble when I should be using a build server of some description.

I agree, I should be using a build server. For the project that this build script was written for I don’t really have the time or resources to put one into place…yet.