
You might have noticed a pattern in my recent posts. They’re all about build scripts, automation, AWS and other related things. It seems that I have fallen into a dev-ops role. Not officially, but it’s basically all I’ve been doing for the past few months.

I’m not entirely sure how it happened. A little automation here, a little scripting there. I see an unreliable manual process and I want to automate it to make it reproducible.

The weird thing is, I don’t really mind. I’m still solving problems, just different ones. It feels a little strange, but it’s nice to have your client/end-user be a technical person (i.e. a fellow programmer) instead of the usual business person with only a modicum of technical ability.

I’m not sure how my employer feels about it, but they must be okay with it, or surely someone would have pulled me aside and asked some tough questions. I’m very vocal about what I’m working on and why, so it’s not like I’m just quietly doing the wrong work in the background without making a peep.

With all of the above in mind, it’s unsurprising that this blog post continues in the same vein as the last few.

Walking Skeletons are Scary

As I mentioned at the end of my previous post, we’ve started to develop a web API to replace a database that was being directly accessed from a mobile application. We’re hoping this will make us less dependent on the specific database used, and give us more control over performance, monitoring, logging and other similar things.

Replacing the database is something that we want to do incrementally though, as we can’t afford to develop the API all at once and then just drop it in. That’s not smart; it just leads to integration issues at the end.

No, we want to replace the direct database access bit by bit, giving us time to adapt to any issues that we encounter.

In Growing Object Oriented Software Guided By Tests, the authors refer to the concept of a walking skeleton. A walking skeleton is when you develop the smallest piece of functionality possible, and focus on sorting out the entire delivery chain so that that piece of functionality can be repeatably built and deployed, end-to-end, without human interaction. This differs from the approach I’ve commonly witnessed, where teams focus on getting the functionality together and then deal with the delivery closer to the “end”, often leading to integration issues and other unforeseen problems, like certificates!

It’s always certificates.

The name comes from the fact that you focus on getting the framework up and running (the bones) and then flesh it out incrementally (more features and functionality).

Our goal was to be able to reliably and automatically publish the latest build of the API to an environment dedicated to continuous integration. A developer would push some commits to a specified branch (master) in BitBucket and it would be automatically built, packaged and published to the appropriate environment, ready for someone to demo or test, all without human interaction.

A Pack of Tools

Breaking the problem down, we identified four main chunks of work: automatically building the software, packaging the application up for deployment, actually deploying it (and tracking which versions have been deployed, so some form of release management) and setting up the actual environment that would be receiving the deployment.

The build problem is already solved, as we use TeamCity. The only difference from some of our other TeamCity builds would be that the entire build process would be encapsulated in a PowerShell script, so that we can keep it in version control and run it separately from TeamCity if necessary. I love what TeamCity is capable of, but I’m always uncomfortable when there is so much logic about the build process living separately from the actual source. I much prefer to put it all in the one place, aiming towards the ideal of “git clone, build” and it just works.
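To give a rough idea of the shape such a script might take (the solution name, project layout and test runner path below are hypothetical, not our actual build), something like this is enough to start chasing that ideal:

```powershell
# build.ps1 - a minimal sketch of a repeatable build script.
# Solution and test paths are hypothetical; nuget.exe and msbuild are assumed to be on the path.
# TeamCity just invokes this script, and a developer can run exactly the same thing locally.
param
(
    [string]$configuration = "Release"
)

$ErrorActionPreference = "Stop"

# Restore packages and compile the solution.
& nuget restore ".\Api.sln"
& msbuild ".\Api.sln" /p:Configuration=$configuration /verbosity:minimal
if ($LASTEXITCODE -ne 0) { throw "Compilation failed." }

# Run the tests so a broken build never makes it as far as packaging.
& ".\packages\NUnit.Runners\tools\nunit-console.exe" ".\src\Api.Tests\bin\$configuration\Api.Tests.dll"
if ($LASTEXITCODE -ne 0) { throw "Tests failed." }
```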

We can use the same tool for both packaging and deployment: Octopus Deploy. Originally we were going to use NuGet packages to contain our application (created via NuGet.exe), but we’ve since found that it’s much better to use OctoPack to create the package, as it structures the internals in a way that makes it easy for Octopus Deploy to deal with.
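As a sketch (the version, server URL and API key below are placeholders rather than our real values), getting OctoPack to produce a package and push it to the Octopus server is mostly a matter of passing a few extra MSBuild properties:

```powershell
# A sketch of packaging with OctoPack (version, server URL and API key are placeholders).
# OctoPack is installed into the API project as a NuGet package; these MSBuild properties
# tell it to create a deployable package and push it to the Octopus server's built-in feed.
$version = "1.0.$env:BUILD_NUMBER"

& msbuild ".\Api.sln" /p:Configuration=Release `
    /p:RunOctoPack=true `
    /p:OctoPackPackageVersion=$version `
    /p:OctoPackPublishPackageToHttp=http://our-octopus-server/nuget/packages `
    /p:OctoPackPublishApiKey=$env:OCTOPUS_API_KEY
```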

Lastly we needed an environment that we could deploy to using Octopus, and this is where the meat of my work over the last week and a bit actually occurs.

I’ve setup environments before, but I’ve always been uncomfortable with the manual process by which the setup usually occurs. You might provision a machine (virtual if you are lucky) and then spend a few hours manually installing and tweaking the various dependencies on it so your application works as you expect. Nobody ever documents all the things that they did to make it work, so you have this machine (or set of machines) that lives in this limbo state, where no-one is really sure how it works, just that it does. Mostly. God help you if you want to create another environment for testing or if the machine that was so carefully configured burns down.

This time I wanted to do it properly. I wanted to be able to, with the execution of a single script, create an entire environment for the API, from scratch. The environment would be regularly torn down and rebuilt, to ensure that we can always create it from scratch and we know exactly how it has been configured (as described in the script). A big ask, but more than possible with some of the tools available today.

Enter Amazon CloudFormation.

Cloud Pun

Amazon is a no brainer at this point for us. It’s where our experience as an organisation lies and it’s what I’ve been doing a lot of recently. There are obviously other cloud offerings out there (hi Azure!), but it’s better to stick with what you know unless you have a pressing reason to try something different.

CloudFormation is another service offered by Amazon (like EC2 and S3), allowing you to write template files in JSON that describe in detail the components of your environment and how its disparate pieces are connected. It’s amazing and I wish I had known about it earlier.

In retrospect, I’m kind of glad I didn't know about it earlier. Using the EC2 and S3 services directly (and all the bits and pieces that they interact with) gave me enough understanding of the basic components to know how to fit them together in a template effectively. If I had started with CloudFormation I probably would have been overwhelmed. It was overwhelming enough with the knowledge that I did have; I can’t imagine what it would be like to hit CloudFormation from nothing.

Each CloudFormation template consists of some set of parameters (names, credentials, whatever), a set of resources and some outputs. Each resource can refer to other resources as necessary (like an EC2 instance referring to a Security Group) and you can setup dependencies between resources as well (like A must complete provisioning before B can start). The outputs are typically something that you want at the end of the environment setup, like a URL for a service or something similar.
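As a heavily trimmed illustration of that structure (nothing like our actual template), here is a skeleton with a single parameter, resource and output, wrapped in a PowerShell here-string so it can sit inside a script:

```powershell
# A heavily trimmed template skeleton (not the real one) showing the three sections:
# Parameters, Resources and Outputs. Resources can reference parameters and each other via "Ref".
$template = @'
{
    "Parameters" : {
        "ApiPort" : { "Type" : "Number", "Default" : "80" }
    },
    "Resources" : {
        "ApiSecurityGroup" : {
            "Type" : "AWS::EC2::SecurityGroup",
            "Properties" : {
                "GroupDescription" : "Allows HTTP through to the API instances",
                "SecurityGroupIngress" : [
                    {
                        "IpProtocol" : "tcp",
                        "FromPort" : { "Ref" : "ApiPort" },
                        "ToPort" : { "Ref" : "ApiPort" },
                        "CidrIp" : "0.0.0.0/0"
                    }
                ]
            }
        }
    },
    "Outputs" : {
        "ApiSecurityGroupId" : { "Value" : { "Ref" : "ApiSecurityGroup" } }
    }
}
'@
```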

I won’t go into detail about the template that I created (it’s somewhat large), but I will highlight some of the important pieces that I needed to get working in order for the environment to fit into our walking skeleton. I imagine that the template will need to be tweaked and improved as we progress through developing the API, but that's how incremental development works. For now it’s simple enough: a Load Balancer, an Auto Scaling Group and a machine definition for the instances in the Auto Scaling Group (along with some supporting resources, like security groups and wait handles).
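The other half of the goal was the single script that creates the environment from that template (and regularly tears it down and rebuilds it). The sketch below shows the general approach using the AWS Tools for PowerShell; the stack name, template file name and parameter are hypothetical, and credentials/region are assumed to already be configured:

```powershell
# A sketch of tearing down and recreating the CI environment from the template.
# Stack name, template file and parameter values are hypothetical placeholders.
Import-Module AWSPowerShell

$stackName = "api-ci"
$templateBody = Get-Content ".\api-environment.template" -Raw

# Delete any existing stack so the environment is always rebuilt from scratch.
if (Get-CFNStack | Where-Object { $_.StackName -eq $stackName })
{
    Remove-CFNStack -StackName $stackName -Force
    while (Get-CFNStack | Where-Object { $_.StackName -eq $stackName }) { Start-Sleep -Seconds 30 }
}

# Create the stack and poll until CloudFormation reports a terminal status.
New-CFNStack -StackName $stackName -TemplateBody $templateBody -Parameter @{ ParameterKey = "ApiPort"; ParameterValue = "80" }

do
{
    Start-Sleep -Seconds 30
    $stack = Get-CFNStack -StackName $stackName
    $status = $stack.StackStatus.ToString()
} while ($status -eq "CREATE_IN_PROGRESS")

if ($status -ne "CREATE_COMPLETE") { throw "Environment creation failed with status $status." }
```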

This Cloud Comes in Multiple Parts

This blog post is already 1300+ words, so it’s probably a good idea to cut it in two. I have a habit of writing posts that are too long, so this is my attempt to get that under control.

Next time I’ll talk about PowerShell Desired State Configuration, deploying dependencies to be accessed by instances during startup, automating deployments with Octopus and many other wondrous things that I still don’t quite fully understand.