In my last blog post, I mentioned the three classifications that I think tests fall into: Unit, Integration and Functional.

Of course, regardless of classification, all tests are only valuable if they are actually being executed. It’s wonderful to say you have tests, but if you’re not running them all the time, and actually looking at the results, they are worthless. Worse than worthless if you think about it, because the presence of tests gives a false sense of security about your system.

Typically executing Unit Tests (and Integration Tests if they use the same framework) is trivial, made vastly easier by having a build server. It’s not that bad even if you don’t have a build server, because those sorts of tests can typically be run on a developer’s machine without a huge amount of fanfare. The downside of not having a build server is that the developers in question need to remember to run the tests. As creative people, following a checklist that includes “wait for tests to run” is sometimes not our strongest quality.

Note that I’m not saying developers should not be running tests on their own machines, because they definitely should be. I would usually limit this to Unit tests though, or very self-contained Integration tests. You need to be very careful about complicating the process of actually writing and committing code if you want to produce features and improvements in a reasonable amount of time. It’s very helpful to encourage people to run the tests themselves regularly, but to also have a fallback position. Just in case.

Compared to running Unit and Integration tests, Functional tests are a different story. Regardless of your software, you’ll want to run your Functional tests in a controlled environment, and this usually involves spinning up virtual machines, installing software, configuring the software and so on. To get good test results, and to lower the risk that the results have been corrupted by previous test runs, you’ll want to use a clean environment each time you run the tests. Setting up and running the tests then becomes a time-consuming and boring task, something that developers hate.

What happens when you give a developer a task that is time-consuming and boring?

Automation happens.

Procedural Logic

Before you start doing anything, it’s helpful to have a high-level overview of what you want to accomplish.

At a high level, the automated execution of functional tests needed to:

  • Set up a test environment.
    • Spin up a fresh virtual machine.
    • Install the software under test.
    • Configure software under test.
  • Execute the functional tests.
  • Report the results.

Fairly straightforward. As with everything related to software though, the devil is in the details.

For anyone who doesn’t want to listen to me blather, here is a link to a GitHub repository containing sanitized versions of the scripts. Note that the scripts were not complete at the time of this post, but will be completed later.

Now, on to the blather!

Automatic Weapons

In order to automate any of the above, I would need to select a scripting language.

It would need to be able to do just about anything (which is true of most scripting languages), but it would also have to let me remotely execute a script on a machine without having to log onto it or use the UI in any way.

I’ve been doing a lot of work with Powershell recently, mostly using it to automate build, package and publish processes. I’d hesitated to learn Powershell for a long time, because every time I encountered something that I thought would have been made easier by using Powershell, I realised I would have to spend a significant amount of time learning just the basics of Powershell before I could do anything useful. I finally bit the bullet and did just that, and it’s snowballed from there.

Powershell is the hammer and everything is a nail now.

It’s a well-established scripting language, and it’s installed on basically every modern version of Windows. Powerful by itself, its integration with the .NET framework gives a C# developer like me the ability to fall back to the familiar .NET BCL for anything I can’t accomplish using just Powershell and its cmdlets. Finally, Powershell Remote Execution allows you to configure a machine so that authenticated users can remotely execute scripts on it.
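As a tiny example of that BCL fallback (nothing to do with the test scripts specifically, just showing that the framework classes are right there when a cmdlet doesn’t exist or doesn’t do what you want):

# Dropping down to the .NET BCL directly from Powershell.
$tempFile = [System.IO.Path]::GetTempFileName()
[System.IO.File]::WriteAllText($tempFile, "some scratch data")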

So, Powershell it was.

A little bit more about Powershell Remote Execution. It leverages Windows Remote Management (WinRM), and once you’ve got all the bits and pieces set up on the target machine, it’s very easy to use.

A couple of things to be aware of with remote execution:

  1. The Windows Remote Management service is not enabled by default on some versions of Windows, and it obviously needs to be running.
  2. Powershell Remote Execution communicates over ports 5985 (HTTP) and 5986 (HTTPS). Earlier versions used 80 and 443. These ports need to be opened in the Firewall on the machine in question.
  3. The user you are planning on using for the remote execution (and I highly suggest using a brand new user just for this purpose) needs to be a member of the [GROUP HERE] group.
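For reference, the one-time setup on the target machine looks something like the following. This is a minimal sketch covering the first two points (run from an elevated Powershell prompt); note that New-NetFirewallRule only exists on more recent versions of Windows, so older machines will need netsh instead.

# Minimal sketch of enabling remoting on the target machine.
# Enables the WinRM service and creates the default HTTP listener.
Enable-PSRemoting -Force

# Open the remoting port explicitly, in case Enable-PSRemoting couldn't add the
# firewall exceptions itself (add 5986 as well if you're using HTTPS).
New-NetFirewallRule -DisplayName "Powershell Remoting (HTTP)" -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow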

Once you’ve sorted the things above, actually remotely executing a script can be accomplished using the Invoke-Command cmdlet, like so:

# Build a credential for the dedicated remote execution user.
$pw = ConvertTo-SecureString '[REMOTE USER PASSWORD]' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('[REMOTE USERNAME]', $pw)

# Open a remoting session to the target machine (an IP address is fine for ComputerName).
$session = New-PSSession -ComputerName $ipaddress -Credential $cred

write-host "Beginning remote execution on [$ipaddress]."

# Execute the script on the remote machine, passing through everything it needs to
# download the files and run the functional tests. Invoke-Command returns the
# script's output, which is where the test results come from.
$testResult = Invoke-Command -Session $session -FilePath "$root\remote-download-files-and-run-functional-tests.ps1" -ArgumentList $awsKey, $awsSecret, $awsRegion, $awsBucket, $buildIdentifier

Notice that I don’t have to use a machine name at all. IP addresses work fine in the ComputerName parameter. How do I know the IP address? That information is retrieved when starting the Amazon EC2 instance.

Environmental Concerns

In order to execute the functional tests, I wanted to be able to create a brand new, clean virtual machine without any human interaction. As I’ve stated previously, we primarily use Amazon EC2 for our virtualisation needs.

The creation of a virtual machine for functional testing would need to be done from another AWS EC2 instance, the one running the TeamCity build agent. The idea being that the build agent instance is responsible for building the software/installer, and would in turn farm out the execution of the functional tests to a completely different machine, to keep a good separation of concerns.

Amazon supplies two methods of interacting with AWS EC2 (Elastic Compute Cloud) via Powershell on a Windows machine.

The first is a set of cmdlets (Get-EC2Instance, New-EC2Instance, etc).

The second is the classes available in the .NET SDK for AWS.

The upside of running on an EC2 instance based off an Amazon-supplied image is that both of those tools come pre-installed, so I didn’t have to mess around with any dependencies.

I ended up using a combination of both (cmdlets and .NET SDK objects) to get an instance up and running, mostly because the cmdlets didn’t expose all of the functionality that I needed.
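To give a taste of the cmdlet side, getting an instance up and running looks something like this. This is a simplified sketch rather than the real creation script, and the bracketed values are placeholders:

# Illustrative sketch, not the actual creation script.
# Credentials and region for all subsequent AWS cmdlet calls.
Set-AWSCredentials -AccessKey $awsKey -SecretKey $awsSecret
Set-DefaultAWSRegion -Region $awsRegion

# New-EC2Instance returns an Amazon.EC2.Model.Reservation (a .NET SDK object),
# which is where the instance identifier and IP address come from.
$reservation = New-EC2Instance -ImageId "[AMI ID]" -MinCount 1 -MaxCount 1 -InstanceType "[INSTANCE TYPE]" -KeyName "[KEY PAIR]" -SecurityGroupId "[SECURITY GROUP ID]"
$instance = $reservation.Instances[0]
$instanceId = $instance.InstanceId
$ipaddress = $instance.PrivateIpAddress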

There were three distinct parts to using Amazon EC2 for the test environment: Creation, Configuration and Waiting, and Clean Up. All of these needed to be automated.

Creation

Obviously an instance needs to be created. The reason this part is split from the Configuration and Waiting is because I’m still not all that accomplished at error handling and returning values in Powershell. Originally I had creation and configuration/waiting in the same script, but if the call to New-EC2Instance returned successfully and then something else failed, I had a hard time returning the instance information in order to terminate it in the finally block of the wrapping script.

The full content of the creation script is available at create-new-ec2-instance.ps1. It’s called from the main script (functional-tests.ps1).

Configuration and Waiting

Beyond the configuration done as part of creation, instances can be tagged to add additional information. The script also needs to wait on a number of important indicators to ensure that the instance is ready to be interacted with, so it made sense to do these two things together in the one script.

The tags help to identify the instance (the name) and also mark the instance as being acceptable to be terminated as part of a scheduled cleanup script that runs over all of our EC2 instances in order to ensure we don’t run expensive instances longer than we expected to.
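The tagging itself is straightforward with the New-EC2Tag cmdlet. The tag names below are illustrative rather than the exact ones we use:

# Illustrative sketch: name the instance and mark it as fair game for the scheduled cleanup.
$tags = @(
    (New-Object Amazon.EC2.Model.Tag("Name", "functional-tests-$buildIdentifier")),
    (New-Object Amazon.EC2.Model.Tag("auto-cleanup", "true"))
)
New-EC2Tag -Resource $instanceId -Tag $tags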

As for the waiting indicators, the first indicator is whether or not the instance is running. This is an easy one, as the state of the instance is very easy to get at. You can see the function below, but all it does is poll the instance every 5 seconds to check whether or not it has entered the desired state yet.
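Stripped down, the idea looks something like this (the function name, parameters and timeout handling here are mine, not the exact code):

# Illustrative sketch of the state polling; the real function is in the linked script.
function Wait-ForEC2InstanceState
{
    param
    (
        [string]$instanceId,
        [string]$desiredState,
        [int]$timeoutSeconds = 600
    )

    $elapsed = 0
    while ($elapsed -lt $timeoutSeconds)
    {
        # Get-EC2Instance returns a reservation, with the instances hanging off it.
        $instance = (Get-EC2Instance -InstanceId $instanceId).Instances[0]
        if ($instance.State.Name -eq $desiredState) { return $instance }

        Start-Sleep -Seconds 5
        $elapsed += 5
    }

    throw "Instance [$instanceId] did not enter state [$desiredState] within [$timeoutSeconds] seconds."
}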

The second indicator is a bit harder to get at, but it’s actually much more important. EC2 instances can be configured with status checks, and one of those status checks is whether or not the instance is actually reachable. I’m honestly not sure if this is something that someone before me set up, or if it is standard on all EC2 instances, but it’s extremely useful.

Anyway, accessing this status check is a bit of a rabbit hole. You can see the function below, but it uses a similar approach to the running check. It polls some information about the instance every 5 seconds until it meets certain criteria. This is the one spot in the entire script that I had to use the .NET SDK classes, as I couldn’t find a way to get this information out of a cmdlet.
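Roughly, it looks like the following, dropping down to the SDK client to get at the reachability detail. Again, this is a sketch with my own names rather than the real function:

# Illustrative sketch of the reachability polling via the .NET SDK classes.
function Wait-ForEC2InstanceToBeReachable
{
    param
    (
        [string]$instanceId,
        [string]$awsKey,
        [string]$awsSecret,
        [string]$awsRegion,
        [int]$timeoutSeconds = 600
    )

    $client = New-Object Amazon.EC2.AmazonEC2Client($awsKey, $awsSecret, [Amazon.RegionEndpoint]::GetBySystemName($awsRegion))

    $elapsed = 0
    while ($elapsed -lt $timeoutSeconds)
    {
        $request = New-Object Amazon.EC2.Model.DescribeInstanceStatusRequest
        $request.InstanceIds.Add($instanceId)
        $response = $client.DescribeInstanceStatus($request)

        # The instance status contains a set of details, one of which is "reachability".
        $status = $response.InstanceStatuses | Where-Object { $_.InstanceId -eq $instanceId }
        $reachability = $status.Status.Details | Where-Object { $_.Name -eq "reachability" }
        if ($reachability.Status -eq "passed") { return }

        Start-Sleep -Seconds 5
        $elapsed += 5
    }

    throw "Instance [$instanceId] was not reachable within [$timeoutSeconds] seconds."
}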

The full content of the configuration and wait script is available at tag-and-wait-for-ec2-instance.ps1, and is just called from the main script.

Clean Up

Since you don’t want to leave instances hanging around, burning money, the script needs to clean up after itself once it’s done.

Programmatically terminating an instance is quite easy, but I had a lot of issues around the robustness of the script itself, as I couldn’t quite settle on the correct structure to ensure that a cleanup was always run if an instance was successfully created. The solution was to split the creation and tag/wait steps into different scripts, to ensure that if creation finished it would always return identifying information about the instance for clean up.

Termination happens in the finally block of the main script (functional-tests.ps1).
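Condensed down, the overall shape of that main script is something like this (heavily simplified, with my own placeholder arguments; the real script deals with a lot more state):

# Illustrative sketch of the try/finally structure in functional-tests.ps1.
$instance = $null
try
{
    # Creation always returns identifying information, even if later steps fail.
    $instance = & "$root\create-new-ec2-instance.ps1" # plus credentials and instance options

    & "$root\tag-and-wait-for-ec2-instance.ps1" # plus the instance and tag details

    # ... install the software under test, run the tests, gather the results ...
}
finally
{
    if ($instance -ne $null)
    {
        # Terminate no matter what happened above, so instances don't hang around burning money.
        Stop-EC2Instance -Instance $instance.InstanceId -Terminate
    }
}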

Instant Machine

Of course all of the instance creation above is dependent on actually having an AMI (Amazon Machine Image) available that holds all of the baseline information about the instance to be created, as well as other things like VPC (Virtual Private Cloud, basically how the instance fits into a network) and security groups (for defining port accessibility). I’d already gone through this process last time I was playing with EC2 instances, so it was just a matter of identifying the various bits and pieces that needed to be done on the machine in order to make it work, while keeping it as clean as possible in order to get good test results.

I went through the image creation process a lot as I evolved the automation script. One thing I found useful was to create a change log for the machine in question (I used a page in Confluence) and to version any images made. This helped keep the whole process repeatable, as well as documenting the requirements of a machine capable of performing the functional tests.

To Be Continued

I think that’s probably enough for now, so next time I’ll pick this up and cover automating the installation of the software under test, then actually running the tests and reporting the results.

Until next time!