
So, we decided to automate the execution of our Functional tests. After all, tests that aren't being run are effectively worthless.

In Part 1, I gave a little background, mentioned the scripting language we were going to use (Powershell), talked about programmatically spinning up the appropriate virtual machine in AWS for testing purposes and then how to communicate with it.

This time I will be talking about automating the installation of the software under test, running the actual tests and then reporting the results.

Just like last time, here is a link to the GitHub repository with sanitized versions of all the scripts (which have been updated significantly), so you don't have to piece them together from the snippets I’m posting throughout this post.

Installed into Power

Now that I could create an appropriate virtual machine as necessary and execute arbitrary code on it remotely, I needed to install the actual software under test.

First I needed to get the installer onto the machine though, as we do not make all of our build artefacts publicly available on every single build (so no just downloading it from the internet). Essentially I just needed a mechanism to transfer files from one machine to another, something that wouldn’t be too difficult to set up and maintain.

I tossed up a number of options:

  • FTP. I could set up an FTP server on the virtual machine and transfer the files that way. The downside is that I would have to set up the FTP server myself, make sure it was secure, and configure it correctly. I haven’t set up many FTP servers before, so I decided against this option.
  • SSH + File Transfer. Similar to the FTP option, I could install an SSH server on the virtual machine and then use something like SCP to securely copy the files to the machine. This would have been the easiest option if the machine was Linux based, but being a Windows machine it was more effort than it was worth.
  • Use an intermediary location, like an Amazon S3 bucket. This is the option I ended up going with.

Programmatically copying files to an Amazon S3 bucket using Powershell is fairly straightforward, although I did run into two issues.

Folders? What Folders?

Even though it’s common for GUI tools that sit on top of Amazon S3 to present the information as a familiar folder/file directory structure, that is not how it actually works. In fact, thinking of the information that you put into S3 in that way will just get you into trouble.

Instead, it’s much more accurate to think of the information you upload to S3 as key/value pairs, where the key tends to look like a fully qualified file path.

I made an interesting error at one point and uploaded 3 things to S3 with the following keys, X, X\Y and X\Z. The S3 website interpreted the X as a folder, which meant that I was no longer able to access the file that I had actually stored at X, at least through the GUI anyway. This is one example of why thinking about S3 as folders/files can get you in trouble.
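You can actually see the flat key space from Powershell. The following is just a sketch (using the same per-call credential style as the functions below) that lists everything sharing a prefix; there are no folders in the result, only keys.

# Every "file" and "folder" comes back as just a key; the prefix is a filter, not a directory.
Get-S3Object -BucketName $awsBucket -KeyPrefix "X" -Region $awsRegion -AccessKey $awsKey -SecretKey $awsSecret |
    Select-Object -ExpandProperty Key |
    write-host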

Actually uploading files to S3 using Powershell is easy enough. Amazon supply a set of cmdlets that allow you to interact with S3, and those cmdlets are pre-installed on machines originally created using an Amazon supplied AMI.

With regards to credentials, you can choose to store the credentials in configuration, allowing you to avoid having to enter them for every call, or you can supply them every time you call the cmdlets. Because this was a script, I chose to supply them on each call, so that the script would be self-contained. I’m not a fan of global settings in general; they make me uncomfortable. I feel that they make the code harder to understand, and in this case, they would have obscured how the cmdlets were authenticating to the service.

The function that uploads things to S3 is as follows:

function UploadFileToS3
{
    param
    (
        [string]$awsKey,
        [string]$awsSecret,
        [string]$awsRegion,
        [string]$awsBucket,
        [System.IO.FileInfo]$file,
        [string]$S3FileKey
    )

    write-host "Uploading [$($file.FullName)] to [$($awsRegion):$($awsBucket):$S3FileKey]."
    Write-S3Object -BucketName $awsBucket -Key $S3FileKey -File "$($file.FullName)" -Region $awsRegion -AccessKey $awsKey -SecretKey $awsSecret

    return $S3FileKey
}

The function that downloads things is:

function DownloadFileFromS3
{
    param
    (
        [string]$awsKey,
        [string]$awsSecret,
        [string]$awsRegion,
        [string]$awsBucket,
        [string]$S3FileKey,
        [string]$destinationPath
    )

    $destinationFile = new-object System.IO.FileInfo($destinationPath)
    if ($destinationFile.Exists)
    {
        write-host "Destination for S3 download of [$S3FileKey] ([$($destinationFile.FullName)]) already exists. Deleting."
        $destinationFile.Delete()
    }

    write-host "Downloading [$($awsRegion):$($awsBucket):$S3FileKey] to [$($destinationFile.FullName)]."
    Read-S3Object -BucketName $awsBucket -Key $S3FileKey -File "$($destinationFile.FullName)" -Region $awsRegion -AccessKey $awsKey -SecretKey $awsSecret | write-host

    $destinationFile.Refresh()

    return $destinationFile
}
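Used together, the two functions look something like this. The key and paths below are placeholders, not the real ones from the build.

# On the build agent: push the installer up to the transfer bucket.
$s3Key = "$buildIdentifier/installer.exe"
$installer = new-object System.IO.FileInfo("C:\builds\output\installer.exe")
UploadFileToS3 -awsKey $awsKey -awsSecret $awsSecret -awsRegion $awsRegion -awsBucket $awsBucket -file $installer -S3FileKey $s3Key

# On the virtual machine: pull it back down again.
$installerFile = DownloadFileFromS3 -awsKey $awsKey -awsSecret $awsSecret -awsRegion $awsRegion -awsBucket $awsBucket -S3FileKey $s3Key -destinationPath "C:\temp\installer.exe"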

Once the installer was downloaded on the virtual machine, it was straightforward to install it silently.

if (!$installerFile.Exists)
{
    throw "The Installer was supposed to be located at [$($installerFile.FullName)] but could not be found."
}

write-host "Installing Application (silently) from the installer [$($installerFile.FullName)]"
# Piping the results of the installer to the output stream forces it to wait until it's done before continuing on
# with the remainder of the script. No useful output comes out of it anyway, all we really care about
# is the return code.
& "$($installerFile.FullName)" /exenoui /qn /norestart | write-host
if ($LASTEXITCODE -ne 0)
{
    throw "Failed to Install Application."
}

We use Advanced Installer, which in turn uses an MSI, so you’ll notice that there are actually a number of switches being used above to get the whole thing to install without human interaction. Also note the piping to write-host, which ensures that Powershell actually waits for the installer process to finish, instead of just starting it and then continuing on its merry way. I would have piped to write-output, but then the uncontrolled information from the installer would go to the output stream and mess up my return value.
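For comparison only (this is not what the script does), Start-Process with -Wait achieves the same blocking behaviour and hands back the exit code directly, without any piping tricks.

# A sketch of the alternative: block until the installer finishes, then check the exit code.
$installProcess = Start-Process -FilePath $installerFile.FullName -ArgumentList "/exenoui /qn /norestart" -Wait -PassThru
if ($installProcess.ExitCode -ne 0)
{
    throw "Failed to Install Application."
}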

Permissions Denied

God. Damn. Permissions.

I had so much trouble with permissions on the files that I uploaded to S3. I actually had a point where I was able to upload files, but I couldn’t download them using the same credentials that I used to upload them! That makes no goddamn sense.

To frame the situation somewhat, I created a user within our AWS account specifically for all of the scripted interactions with the service. I then created a bucket to contain the temporary files that are uploaded and downloaded as part of the functional test execution.

The way that S3 defines permissions on buckets is a little strange in my opinion. I would expect to be able to define permissions on a per user basis for the bucket. Like, user X can read from and write to this bucket, user Y can only read, etc. I would also expect files that are uploaded to this bucket to then inherit those permissions, like a folder, unless I went out of my way to change them. That’s the mental model of permissions that I have, likely as a result of using Windows for many years.

This is not how it works.

Yes, you can define permissions on a bucket, but not for individual users within your AWS account. There doesn’t even seem to be a way to define permissions for other specific AWS accounts. There are a number of groups available, one of which is Authenticated Users, which I originally chose with the understanding that it would give authenticated users belonging to my AWS account the permissions I specified. Plot twist: Authenticated Users means any AWS user. Ever. As long as they are authenticated. Obviously not what I wanted. At least I could upload files though (but not download them).

Permissions are not inherited when set through the simple options I mentioned above, so any file I uploaded had no permissions set on it.

The only way to set permissions with the granularity necessary and to have them inherited automatically is to use a Bucket Policy.

Setting up a Bucket Policy is not straightforward, at least to a novice like myself.

After some wrangling with the helper page, and some reading of the documentation, here is the bucket policy I ended up using, with details obscured to protect the guilty.

{
    "Version": "2008-10-17",
    "Id": "Policy1417656003414",
    "Statement": [
        {
            "Sid": "Stmt1417656002326",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::[USER IDENTIFIER]"
            },
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::[BUCKET NAME]/*"
        },
        {
            "Sid": "Stmt14176560023267",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::[USER IDENTIFIER]"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::[BUCKET NAME]"
        }
    ]
}

The policy actually reads okay once you have it, but I’ll be honest, I still don’t quite understand it. I know that I’ve specifically given the listed permissions on the contents of the bucket to my user, and also given List permissions on the bucket itself. This allowed me to upload and download files with the user I created, which is what I wanted. I’ll probably never need to touch Bucket Policies again, but if I do, I’ll make more of an effort to understand them.
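For what it’s worth, the policy doesn’t have to be pasted into the AWS console by hand. A sketch like the following (assuming the policy JSON is saved in a file next to the script) applies it using the same cmdlet family.

# Read the policy document and apply it to the bucket. Write-S3BucketPolicy replaces
# any existing policy on the bucket.
$policy = Get-Content -Path "$PSScriptRoot\bucket-policy.json" -Raw
Write-S3BucketPolicy -BucketName $awsBucket -Policy $policy -Region $awsRegion -AccessKey $awsKey -SecretKey $awsSecret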

Testing My Patience

Just like the installer, before I can run the tests I need to actually have the tests to run.

We currently use TestComplete as our Functional test framework. I honestly haven’t looked into TestComplete all that much, apart from just running the tests, but it seems solid enough. TestComplete stores your functional tests in a structure similar to a Visual Studio project, with a project file and a number of files underneath it that define the actual tests and the order to run them in.

For us, the functional tests are stored in the same Git repository as our code. We use feature branches, so it makes sense to run the functional tests that line up with the code that the build was made from. The Build Agent that builds the installer has access to the source (obviously), so it’s a fairly simple matter to zip up the test definitions and any dependent files, and push that archive to S3 in exactly the same manner as the installer, ready for the virtual machine to download.
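As a rough sketch of that packaging step (the paths and variables here are hypothetical, and it assumes .NET 4.5’s ZipFile class plus the UploadFileToS3 function from earlier):

# Zip up the functional test definitions and push the archive to S3 for the
# functional test worker to download later.
Add-Type -AssemblyName System.IO.Compression.FileSystem
$testsArchive = "$buildDirectory\functional-tests.zip"
if (Test-Path $testsArchive) { Remove-Item $testsArchive }
[System.IO.Compression.ZipFile]::CreateFromDirectory("$sourceDirectory\tests\functional", $testsArchive)
$archiveFile = new-object System.IO.FileInfo($testsArchive)
UploadFileToS3 -awsKey $awsKey -awsSecret $awsSecret -awsRegion $awsRegion -awsBucket $awsBucket -file $archiveFile -S3FileKey "$buildIdentifier/functional-tests.zip"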

Actually running the tests though? That was a bastard.

As I mentioned in Part 1, after a virtual machine is spun up as part of this testing process, I use Powershell remote execution to push a script to the machine for execution.

As long as you don’t want to do anything with a user interface, this works fantastically.

Functional tests being based entirely around interaction with the user interface therefore prove problematic.

The Powershell remote session that is created when executing the script remotely has no capacity for user interactivity, and as far as I know cannot be given one. It’s just something that Powershell can’t do.

However, you can remotely execute another tool, PSExec, and specify that it run a process locally in an interactive session.

$testExecute = 'C:\Program Files (x86)\SmartBear\TestExecute 10\Bin\TestExecute.exe'
$testExecuteProjectFolderPath = "$functionalTestDefinitionsDirectoryPath\Application"
$testExecuteProject = "$testExecuteProjectFolderPath\ApplicationTests.pjs"
$testExecuteResultsFilePath = "$functionalTestDefinitionsDirectoryPath\TestResults.mht"

write-host "Running tests at [$testExecuteProject] using TestExecute at [$testExecute]. Results going to [$testExecuteResultsFilePath]."
# Psexec does a really annoying thing where it writes information to STDERR, which Powershell detects as an error
# and then throws an exception. The 2>&1 redirects all STDERR to STDOUT to get around this.
# Bit of a dirty hack here. The -i 2 parameter executes the application in interactive mode specifying
# a pre-existing session with ID 2. This is the session that was setup by creating a remote desktop
# session before this script was executed. Sorry.
& "C:\Tools\sysinternals\psexec.exe" -accepteula -i 2 -h -u $remoteUser -p $remotePassword "$testExecute" "$testExecuteProject" /run /SilentMode /exit /DoNotShowLog /ExportLog:$testExecuteResultsFilePath 2>&1 | write-host
[int]$testExecuteExitCode = $LASTEXITCODE

The -i [NUMBER] in the command above tells PSExec to execute the process in an interactive user session, specifically the session with the ID specified. I’ve hardcoded mine to 2, which isn’t great, but works reliably in this environment because the remote desktop session I create after spinning up the instance always ends up with ID 2. Hacky.
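If the hardcoded ID ever stopped being reliable, something like the following could look up the session ID for the remote desktop user instead. This is only a sketch; it assumes the column layout of the built-in query session tool and is not part of the actual scripts.

# Find the session belonging to the remote desktop user by parsing query session output,
# then grab the numeric ID column.
$sessionLine = (& query session) | Where-Object { $_ -match "\s$([regex]::Escape($remoteUser))\s" } | Select-Object -First 1
if ($sessionLine -match "\s(\d+)\s")
{
    $interactiveSessionId = [int]$Matches[1]
}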

Remote desktop session you may ask? In order for TestComplete (well, TestExecute technically) to execute the tests correctly, you need to actually have a desktop session set up. I assume this is related to it hooking into the mouse and keyboard events of the user session or something; I don’t really know. All I know is that it didn't work without a remote desktop session of some sort.

On the upside, you can automate the creation of a remote desktop session with a little bit of effort, although there are two hard bits to be aware of.

Who Are You?

There is no obvious way to supply credentials to the Remote Desktop client (mstsc.exe). You can supply the machine that you want to make the connection to (thank god), but not credentials. I think there might be support for storing this information in an RDP file though, which seems to be fairly straightforward. As you can probably guess from my lack of knowledge about that particular approach, that’s not what I ended up doing.

I still don’t fully understand the solution, but you can use the built-in Windows utility cmdkey to store credentials for things. If you store credentials for the remote address that you are connecting to, the Remote Desktop client will happily use them.

There is one thing you need to be careful with when using this utility to automate Remote Desktop connections. If you clean up after yourself (by deleting the stored credentials after you use them), make sure you wait until the remote session is established. If you delete the credentials before the client actually uses them, you will end up thinking that the stored credentials didn’t work, waste a day investigating and trialling VNC solutions (which ultimately don’t work as well as Remote Desktop), and only then realise the stupidity of your mistake. This totally happened to a friend of mine. Not me at all.

Anyway, the entirety of the remote desktop script (from start-remote-session.ps1):

param (
    [Parameter(Mandatory=$true,Position=0)]
    [Alias("CN")]
    [string]$ComputerNameOrIp,
    [Parameter(Mandatory=$true,Position=1)]
    [Alias("U")] 
    [string]$User,
    [Parameter(Mandatory=$true,Position=2)]
    [Alias("P")] 
    [string]$Password
)

& "$($env:SystemRoot)\system32\cmdkey.exe" /generic:$ComputerNameOrIp /user:$User /pass:$Password | write-host

$ProcessInfo = new-object System.Diagnostics.ProcessStartInfo

$ProcessInfo.FileName = "$($env:SystemRoot)\system32\mstsc.exe" 
$ProcessInfo.Arguments = "/v $ComputerNameOrIp"

$Process = new-object System.Diagnostics.Process
$Process.StartInfo = $ProcessInfo
$startResult = $Process.Start()

Start-Sleep -s 15

& "$($env:SystemRoot)\system32\cmdkey.exe" /delete:$ComputerNameOrIp | write-host

return $Process.Id
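Invoking the script is straightforward; the values below are placeholders for the real instance address and credentials.

# Called from the build agent once the test instance is up. The return value is the
# mstsc process ID, which can be used to clean the session up later.
$rdpProcessId = & .\start-remote-session.ps1 -ComputerNameOrIp "10.0.1.50" -User "Administrator" -Password "[PASSWORD]"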

Do You Trust Me?

The second hard thing with the Remote Desktop client is that it will ask you if you trust the remote computer if you don’t have certificates setup. Now I would typically advise that you setup certificates for this sort of thing, especially if communicating over the internet, but in this case I was remoting into a machine that was isolated from the internet within the same Amazon virtual network, so it wasn’t necessary.

Typically, clicking “yes” on an identity verification dialog isn't a problem, even if it is annoying. Of course, in a fully automated environment, where I am only using remote desktop because I need a desktop to actually be rendered to run my tests, it’s yet another annoying thing I need to deal with without human interaction.

Luckily, you can use a registry script to disable the identity verification in Remote Desktop. This had to be done on the build agent instance (the component that actually executes the functional tests).

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client]
"AuthenticationLevelOverride"=dword:00000000

Report Soldier!

With the tests actually running reliably (after all the tricks and traps mentioned above), all that was left was to report the results to TeamCity.

It was trivial to get an exit code from TestExecute (0 = Good, 1 = Warnings, 2 = Tests Failed, 3 = Tests Didn’t Run). You can then use this exit code to indicate to TeamCity whether or not the tests succeeded.

if ($testResult -eq $null)
{
    throw "No result returned from remote execution."
}

if ($testResult.Code -ne 0)
{
    write-host "##teamcity[testFailed name='$teamCityFunctionalTestsId' message='TestExecute returned error code $($testResult.Code).' details='See artifacts for TestExecute result files']"
}
else
{
    write-host "##teamcity[testFinished name='$teamCityFunctionalTestsId'"
}

That's enough to pass or fail a build.

Of course, if you actually have failing functional tests, you want a hell of a lot more information in order to find out why they failed. Considering the virtual machine on which the tests were executed will have been terminated at the end of the test run, we needed to extract the maximum amount of information possible.

TestComplete (TestExecute) has two reporting mechanisms, not including the exit code from above.

The first is an output file, which you can specify when running the process. I chose an MHT file, which is a nice HTML document showing the tests that ran (and failed), and which has embedded screenshots taken on the failing steps. Very useful.

The second is the actual execution log, which is attached to the TestComplete project. This is a little harder to use, as you need to take the entire project and its log file and open it in TestComplete, but it is great for in-depth digging, as it gives a lot of information about which steps are failing and has screenshots as well.

Both of these components are zipped up on the functional tests worker and then placed into S3, so the original script can download them and attach them to the TeamCity build artefacts. This is essentially the same process as getting the test definitions and installer onto the functional tests worker, but in reverse, so I won’t go into any more detail about it.
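For reference, once the archive is back on the build agent, attaching it is just another TeamCity service message. The variable below is hypothetical.

# Tell TeamCity to pick up the downloaded results archive as a build artifact.
write-host "##teamcity[publishArtifacts '$functionalTestResultsArchivePath']"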

Summary

So, after all is said and done, I had automated:

  • The creation of a virtual machine for testing.
  • The installation of the latest build on that virtual machine.
  • The execution of the functional tests.
  • The reporting of the results, in both a programmatic (build pass/fail) and human readable way.

There were obviously other supporting bits and pieces around the core processes above, but there is little point in mentioning them here in the summary.

Conclusion

All up, I spent about 2 weeks of actual time on automating the functional tests.

A lot of the time was spent familiarising myself with a set of tools that I’d never (or barely) used before, like TestComplete (and TestExecute), the software under test, Powershell, programmatic access to Amazon EC2 and programmatic access to Amazon S3.

As with all things technical, I frequently ran into roadblocks and things not working the way that I would have expected them to out of the box. These things are vicious time sinks, involving scouring the internet for other people who’ve had the same issue and hoping to all that is holy that they solved their problem and then remembered to come back and share their solution.

Like all things involving software, I fought complexity every step of the way. The two biggest offenders were the complexity of handling errors in Powershell in a robust way (so I could clean up my EC2 instances) and actually getting TestExecute to run the damn tests because of its interactivity requirements.

When all was said and done though, the functional tests are now an integral part of our build process, which means there is far more incentive to add to them and maintain them. I do have some concerns about their reliability (UI-focused tests are a bit like that), but that can be improved over time.