
I know just enough of the windows scripting language (i.e. batch files) to get by. I've written a few scripts using it (at least one of which I've already blogged about), but I assume there is a world of deeper understanding and expertise there that I just haven't fathomed yet. I'm sure it's super powerful, but it just feels so…archaic.

Typically what happens is that I will find myself with the need to automate something, and two options will come to mind:

  1. I could write a batch file. This is always a struggle, because I write C# code all day, and going back to the limited functionality of the windows scripting language is painful. I know that if I want to do something past a certain point of complexity, I’ll need to combine this approach with something else.
  2. I could write a C# application of some sort (probably console) that does the automation. C# is where most of my skills lie, but it seems like I’m overcomplicating a situation when I build an entire C# console application. Additionally, if I’m automating something, I want it to be as simple and readable as possible for the next person (which may be me in 3 months) and encapsulating a bunch of automation logic into a C# application is not immediately discoverable.

I usually go with option 1 (batch file), which, to me, is the lesser of two evils.

Enter Powershell

I’ve always secretly known that there are more than 2 options, probably many more.

At the very least there is a 3rd option:

  3. Use Powershell. It's just like a batch file, except completely different. You can leverage the .NET framework, and you get a lot more useful built-in commands.

For me, the downside of using Powershell has always been the learning curve.

Every time I go to solve a problem that Powershell might be useful for, I can never justify spending the time to learn Powershell, even just the minimum needed to get the job done.

I finally got past that particular mental block recently when I wanted to write an automated build script that did the following:

  1. Update a version.
  2. Commit changes to git.
  3. Build a C# library project.
  4. Package the library into a NuGet package.
  5. Track the package in git for later retrieval.

There's a lot of complexity hidden in those 5 statements, enough that I knew I wouldn't be able to accomplish it using just the vanilla windows scripting language. I resolved that this was definitely not something that should be hidden inside the source code of an application, so it was time to finally go nuclear and learn Powershell.

Getting Started

The last time I tried to use Powershell was a…while ago. Long enough ago that it wasn't guaranteed that a particular computer would have Powershell installed on it. That's pretty much not true anymore, so you can just run the "powershell" command from the command line to enter the Powershell REPL. Typing "exit" leaves Powershell and returns you to your command prompt.

Using Powershell on the command line is all well and good for exploration, but how can I use it for scripting?

powershell -Executionpolicy remotesigned -File [FileName]

The -Executionpolicy flag makes it so that you can actually run the script file. By default Powershell has the Restricted policy set, meaning scripts will not run.
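If you would rather not supply that flag every time, you can loosen the policy for your user instead. A minimal sketch (the RemoteSigned/CurrentUser combination here is just a suggestion, not something the script requires):

# Show the effective policy, and the policy at each scope.
Get-ExecutionPolicy
Get-ExecutionPolicy -List

# Allow locally created scripts to run for the current user only.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser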

Anyway, seems straightforward enough, so without further ado, I'll show you the finished Powershell script that accomplishes the above, and then go through it in more detail.

The Script

param
(
    [switch]$release
)

$gitTest = git
if($gitTest -match "'git' is not recognized as an internal or external command")
{
    write-error "Cannot find Git in your path. You must have Git in your path for this package script to work."
    exit
}

# Check for dirty git index (i.e. uncommitted, unignored changes).
$gitStatus = git status --porcelain
if($gitStatus.Length -ne 0)
{
    write-error "There are uncommitted changes in the working directory. Deal with them before you package, or the tag that's made in git as a part of a package will be incorrect."
    exit
}

$currentUtcDateTime = (get-date).ToUniversalTime()

$assemblyInfoFilePath = "[PROJECT PATH]\Properties\AssemblyInfo.cs"
$assemblyVersionRegex = "(\[assembly: AssemblyVersion\()(`")(.*)(`"\))"
$assemblyInformationalVersionRegex = "(\[assembly: AssemblyInformationalVersion\()(`")(.*)(`"\))"

$existingVersion = (select-string -Path $assemblyInfoFilePath -Pattern $assemblyVersionRegex).Matches[0].Groups[3]
$existingVersion = new-object System.Version($existingVersion)

"Current version is [" + $existingVersion + "]."

$major = $existingVersion.Major
$minor = $existingVersion.Minor
$build = $currentUtcDateTime.ToString("yy") + $currentUtcDateTime.DayOfYear
$revision = [int](([int]$currentUtcDateTime.Subtract($currentUtcDateTime.Date).TotalSeconds) / 2)

$newVersion = [System.String]::Format("{0}.{1}.{2}.{3}", $major, $minor, $build, $revision)

"New version is [" + $newVersion + "]."

"Replacing AssemblyVersion in [" + $assemblyInfoFilePath + "] with new version."
$replacement = '$1"' + $newVersion + "`$4"
(get-content $assemblyInfoFilePath) | foreach-object {$_ -replace $assemblyVersionRegex, $replacement} | set-content $assemblyInfoFilePath

if ($release.IsPresent)
{
    $newInformationalVersion = $newVersion
}
else
{
    write-host "Building prerelease version."
    $newInformationalVersion = [System.String]::Format("{0}.{1}.{2}.{3}-pre", $major, $minor, $build, $revision)
}

"Replacing AssemblyInformationalVersion in [" + $assemblyInfoFilePath + "] with new version."
$informationalReplacement = '$1"' + $newInformationalVersion + "`$4"
(get-content $assemblyInfoFilePath) | foreach-object {$_ -replace $assemblyInformationalVersionRegex, $informationalReplacement} | set-content $assemblyInfoFilePath

"Committing changes to [" + $assemblyInfoFilePath + "]."
git add $assemblyInfoFilePath
git commit -m "SCRIPT: Updated version for release package."

$msbuild = 'C:\Program Files (x86)\MSBuild\12.0\bin\msbuild.exe'
$solutionFile = "[SOLUTION FILENAME]"

.\tools\nuget.exe restore $solutionFile

& $msbuild $solutionFile /t:rebuild /p:Configuration=Release
if($LASTEXITCODE -ne 0)
{
    write-host "Build FAILURE" -ForegroundColor Red
    exit
}

.\tools\nuget.exe pack [PATH TO PROJECT FILE] -Prop Configuration=Release -Symbols

write-host "Creating git tag for package."
git tag -a $newInformationalVersion -m "SCRIPT: NuGet Package Created."

[Wall of text] crits [reader] for [astronomical amount of damage].

To prevent people from having to remember to run Powershell with the correct arguments, I also created a small batch file that you can just run by itself to execute the script.

@ECHO OFF

powershell -Executionpolicy remotesigned -File _Package.ps1 %*

As you can see, the batch script is straightforward. All it does is call the Powershell script, passing in any arguments that were passed to the batch file (that’s the %* at the end of the line).

Usage is:

package // To build a prerelease, mid-development package.

package -release // To build a release package, intended to be uploaded to NuGet.org.

Parameters

The first statement at the top defines parameters to the script. In this case, there is only one parameter, and it defines whether or not the script should be run in release mode.

I wasn't comfortable with every single package built using the script automatically being a release build. If I automated the upload to NuGet.org at some later date, I wouldn't be able to create a bunch of different builds during development without potentially impacting people actually using the library (they would see new versions available and might update, which would leave me having to support every single package I made, even the ones from mid-development). That's less than ideal.

The release flag determines whether or not the AssemblyInformationalVersion has -pre appended to the end of the version string. NuGet uses the AssemblyInformationalVersion to determine whether or not the package is a prerelease build, which isolates it from the normal stream of packages.
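As a minimal, standalone illustration of how a switch parameter behaves (this isn't part of the package script, just an example):

param
(
    [switch]$release
)

if ($release.IsPresent)
{
    write-host "Building a release package."
}
else
{
    write-host "Building a prerelease package."
}

# .\example.ps1           -> Building a prerelease package.
# .\example.ps1 -release  -> Building a release package.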

Checks

Because the script is dependent on a couple of external tools that could not be easily included in the repository (git in particular, but also MSBuild) I wanted to make sure that it failed fast if those tools were not present.

I've only included a check for git, because I can assume that the person running the script has Visual Studio 2013 installed (the script calls MSBuild via its full path), whereas git needs to be on the current path in order for the script to do what it needs to do.
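An alternative way to do the fail fast check for git (which the script above doesn't use, but which might be a little more direct) is to ask Powershell whether the command resolves at all:

# Get-Command returns nothing (rather than throwing) when -ErrorAction SilentlyContinue is set.
if ((Get-Command "git" -ErrorAction SilentlyContinue) -eq $null)
{
    write-error "Cannot find Git in your path. You must have Git in your path for this package script to work."
    exit
}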

The other check that the script does is to see whether or not there are any uncommitted changes.

I do this because one of the main purposes of this build script is to build a library and then mark the source control system so that the source code for that specific version can be retrieved easily. Without this check, someone could use the script with uncommitted local changes and the resulting tag would not actually represent the contents of the package. Super dangerous!

Versioning

In this particular case, versioning is (yet again) a huge chunk of the script, as it is intended to build a library for distribution.

The built-in automatic versioning for .NET is actually pretty good. The problem is, I have never found a way to use that version easily from a build script, and the version is never directly stated in the AssemblyInfo file, so you can't see the version at a glance just by reading the code. I need more control than that.

The algorithm that the .NET framework uses is (partially) explained in the documentation for AssemblyVersion.

To summarise:

  1. The version is of the form [MAJOR].[MINOR].[BUILD].[REVISION].
  2. You can substitute a * for either (or both of) BUILD and REVISION.
  3. BUILD is automatically set to the number of days since 1 January 2000.
  4. REVISION is automatically set to the number of seconds since midnight / 2.

The algorithm I implemented in the script is a slight modification of that, where BUILD is instead set to YYDDD.

Anyway, Powershell makes the whole process of creating this new version much, much easier than it would be in a normal batch file, primarily because of the ability to use the types and functions in the .NET framework. Last time I tried to give myself more control over versioning I had to write a custom MSBuild task.

The script grabs the version currently in the AssemblyVersion attribute of the specified AssemblyInfo file using the select-string cmdlet. It extracts the MAJOR and MINOR numbers from the existing version (using the .NET Version class) and then creates a string containing the new version.

Finally, it uses a brute force replacement approach to jam the new version back into the AssemblyVersion attribute, using the same regular expression. I’ll be brutally honest, I don’t understand the intricacies of the way in which it does the replacement, just that it reads all of the lines from the file, modifies any that match the regular expression, then writes them all back, effectively overwriting the entire file. I wouldn’t recommend this approach for any serious replacement, but AssemblyInfo is a very small file, so it doesn’t matter all that much here.
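For the curious, that replacement pipeline breaks down roughly like this (the same one-liner the script uses, just spread out and commented):

# 1. Read every line of the file into memory. The parentheses force the read to
#    finish before set-content tries to write to the same file.
# 2. Run -replace over each line; lines that don't match pass through untouched.
# 3. Write all of the lines back out, overwriting the original file.
(get-content $assemblyInfoFilePath) |
    foreach-object { $_ -replace $assemblyVersionRegex, $replacement } |
    set-content $assemblyInfoFilePath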

Some gotchas here. Initially I broke the regular expression into 3 groups: left of the version, the version and right of the version. However, when it came time to do the replace, I could not create a replacement string using the first capture group + the new version, because the resulting string came out like this: "$11.2.14309.2306". When Powershell/.NET tried to substitute the capture groups in, it tried to substitute the $11 group, which didn't exist. Simply adding whitespace would have broken the version in the file, so I had to break the regular expression into 4 groups, one of which is just the single double quote to the left of the version. When it comes time to do the replacement, I just manually insert that quote, and that works. A bit nastier than I would like, but ah well.
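A stripped down illustration of the problem and the workaround (the version number here is just an example value):

$line = '[assembly: AssemblyVersion("1.2.0.0")]'

# With three groups (left of the version, the version, right of the version), the
# replacement '$1' + '1.2.14309.2306' becomes "$11.2.14309.2306", and the regex
# engine goes looking for a group called $11.
# With four groups, the quote is its own group, so the replacement can start with
# $1, insert the quote literally, and avoid the ambiguity entirely.
$fourGroups = "(\[assembly: AssemblyVersion\()(`")(.*)(`"\))"
$replacement = '$1"' + "1.2.14309.2306" + '$4'
$line -replace $fourGroups, $replacement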

The version update is then duplicated for the AssemblyInformationalVersion, with the previously mentioned release/prerelease changes.

Source Control

A simple git add and commit to ensure that the altered AssemblyInfo file is in source control, ready to be tagged after the build is complete. I prepended the commit message with "SCRIPT:" so that it's easy to tell which commits were done automatically when looking at the git log output.

Build

Nothing fancy, just a normal MSBuild execution, preceded by a NuGet package restore.

I struggled with calling MSBuild correctly from the script for quite a while. For some reason Powershell just would not let me call it with the appropriate parameters. Eventually I stumbled onto this solution. & is simply the call operator.
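In short, a bare string is just a string to Powershell; the call operator is what tells it to execute the thing the string points at. A minimal illustration (the path and arguments are the same ones the script uses):

$msbuild = 'C:\Program Files (x86)\MSBuild\12.0\bin\msbuild.exe'

# On its own, this just writes the string to the output; nothing runs.
$msbuild

# The call operator (&) executes the command the string refers to, with any
# following tokens passed through as arguments.
& $msbuild $solutionFile /t:rebuild /p:Configuration=Release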

The script checks that there weren't any errors during the build, because there would be no point in going any further if there were. The $LASTEXITCODE variable is a handy little variable that tracks the exit code from the last call.

Packaging

Simple NuGet package command. I use -Symbols because I prefer NuGet packages that include source code, so that they are easier to debug. This is especially useful for a library.

Source Control (again)

If we got this far, we need to create a record that the package was created. A simple git tag showing the same version as the AssemblyInformationalVersion is sufficient.
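Retrieving the source for a given package later is then just a matter of checking out that tag (the version here is an example value):

git checkout 1.2.14309.2306-pre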

Summary

As you can clearly see, the script is not perfect. Honestly, I'm not even sure if it's good. At the very least it gets the job done. I'm sure as I continue to use it for its intended purpose I will come up with ways to improve it and make it clearer and easier to understand.

Regardless of that, Powershell is amazing! The last time I tried to solve this problem I wrote a custom MSBuild task to get the versioning done. That was a lot more effort than the versioning in this script, and much harder to maintain moving forward. The task was better structured though, so that’s definitely an area where this script could use some improvement. Maybe I can extract the versioning code out into a function? Another file maybe? I should almost certainly run the tests before packaging as well, no point in making a package where the library has errors that could have been picked up. Who knows, I’m sure I’ll come up with something.
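If I did extract the versioning into a function, it might look something like this (purely a sketch; the function name is made up and it only covers the version calculation, not the file rewriting):

function New-BuildVersion
{
    param
    (
        [System.Version]$existingVersion,
        [switch]$release
    )

    $now = (get-date).ToUniversalTime()

    # Same scheme as the script: BUILD is YYDDD, REVISION is seconds since midnight / 2.
    $build = $now.ToString("yy") + $now.DayOfYear
    $revision = [int](([int]$now.Subtract($now.Date).TotalSeconds) / 2)

    $version = [System.String]::Format("{0}.{1}.{2}.{3}", $existingVersion.Major, $existingVersion.Minor, $build, $revision)
    if (-not $release.IsPresent) { $version = $version + "-pre" }

    return $version
}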

You may also ask why I went to all this trouble when I should be using a build server of some description.

I agree, I should be using a build server. For the project that this build script was written for I don’t really have the time or resources to put one into place…yet.


ClickOnce seems to be quite maligned on the internet, but I think it's a nice, simple publishing technology, as long as your application is small and doesn't need to do anything fancy on installation. It offers automatic updating as well, which is very nice.

Anyway, I was getting annoyed that the only way I could deploy a new version of this desktop WPF app was by going into Visual Studio and right-clicking the Project -> Properties -> Publish. Then I had to go in and do things with the version, and make sure the other settings were correct.

It was all just too many clicks and too much to remember.

Complicating matters is that the app has 3 different build configurations: development, staging and release. Switching to any one of those build configurations changed the ClickOnce publish settings and the API endpoint used by the application. However, sometimes Visual Studio would just forget to update some of the ClickOnce publish settings, which caused me to publish a development application to the staging URL a couple of times (and vice versa). You had to actually reload the project or restart Visual Studio in order to guarantee that it would deploy to the correct location with the correct configuration. Frustrating (and dangerous!).

Here's a little bit more information about the changes that occur as a result of selecting a different build configuration.

The API endpoint is just an app setting, so it uses Slow Cheetah and config transforms.

The publish URL (and supporting ClickOnce publish information) is stored in the csproj file though, so it uses a customised targets file, like this:

<Project ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup Condition="'$(Configuration)' == 'development-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\development\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/development/</InstallUrl>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'staging-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\staging\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/staging/</InstallUrl>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'production-release'">
        <PublishUrl>[MAPPED CLOUD DRIVE]\production\</PublishUrl>
        <InstallUrl>[PUBLICALLY ACCESSIBLE INSTALL URL]/production/</InstallUrl>
    </PropertyGroup>
</Project>

This customised targets file was included in the csproj file like this:

<Import Project="$(ProjectDir)\Customized.targets" />

So, build script time! Mostly to make doing a deployment easier, but hopefully also to deal with that issue of selecting a build configuration and not having the proper settings applied.

Like everything involving software, many Yaks were shaved as part of automating this deployment.

Automation

My plan was to create 3 scripts, one to deploy to each environment. Those 3 scripts should use a single script as a base, so I don’t create a maintenance nightmare. This script would have to be parameterised around the target configuration. Sounds simple enough.

Let's look at the finished product first, then we'll go into each section in detail.

@ECHO OFF

SET publish_type=%1
SET install_url=[PUBLICALLY ACCESSIBLE INSTALL URL]/%publish_type%/
SET configuration=%publish_type%-release
SET remote_publish_destination=[MAPPED CLOUD DRIVE]\%publish_type%\

SET timestamp_file=publishDate.tmp
tools\date.exe +%%Y%%m%%d%%H%%M%%S > %timestamp_file%
SET /p timestamp_directory= < %timestamp_file%
DEL %timestamp_file%

REM %~dp0 is the directory containing the batch file, which is the Solution Directory.
SET publish_output=%~dp0publish\%publish_type%\%timestamp_directory%\
SET msbuild_output=%publish_output%msbuild\

tools\NuGet.exe restore [SOLUTION FILE]

"C:\Program Files (x86)\MSBuild\12.0\bin\msbuild.exe" [SOLUTION FILE] /t:clean,rebuild,publish /p:Configuration=%configuration%;PublishDir=%msbuild_output%;InstallUrl=%install_url%;IsWebBootstrapper=true;InstallFrom=Web

IF ERRORLEVEL 1 (
    ECHO Build Failure. Publish terminated.
    EXIT /B 1
)

REM Add a small (1 second) delay here because sometimes the robocopy fails to delete the files its moving because they are in use (probably by MSBUILD).
TIMEOUT /T 1

robocopy %msbuild_output% %remote_publish_destination% /is /E

Not too bad. It fits on one screen, which is nice. The script itself doesn't actually do all that much, it just leverages MSBUILD and the build configurations that were already present in the solution.

The script lives inside the root directory of my solution, and it does everything relative to the directory that it's in (so you can freely move it around). Scripts with hardcoded directories are such a pain to use, and there's not even a good reason to do it. It's just as easy to write the script relatively.

Alas, it's not perfectly self-contained. It is reliant on Visual Studio 2013 being installed, and a couple of other things (which I will mention later). Ah well.

Variables

First up, the script sets some variables that it needs to work with. The only parameter supplied to the script is the publish or deployment type (for me that's development, staging or release). It then uses this value to select the appropriate build configuration (because they are named similarly) and the final publish location (which is a publicly accessible URL).

Working Directory

Secondly, the script creates a working directory using the type of publish being performed and the current date and time. I've used the "date" command-line tool for this, which I extracted (stole) from a set of Unix tools that were ported to Windows. It's completely self-contained (no dependencies, yeah!) so I've just included it in the tools directory of my repository. If you're wondering why it creates a file to put the timestamp into, this is because I had some issues with the timestamp not evaluating correctly when I just put it directly inside a variable. The SET /P line allows you to set a variable using input from the user, and the input that it is supplied with is the contents of the file (using the < operator). More tricksy than I would like, but it gets the job done.

My other option was to write a simple C# command line application to get the formatted timestamp for the directory name myself, but this was a good exercise in exploring batch files. I suppose I could have also used ScriptCS to just do it inline (or Powershell) but that would have taken even more time to learn (I don’t have a lot of Powershell experience). This was the simplest and quickest solution in the end.
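For reference, the Powershell option I mentioned would have been a one-liner like this (just showing the road not taken; the deployment script doesn't actually use it, and you would still need a FOR /F or SET /P dance to capture the output into a batch variable):

powershell -Command "(get-date).ToString('yyyyMMddHHmmss')"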

NuGet Package Restore

Third, the script restores the packages that the solution uses via NuGet. There are a couple of ways that you can do this inside the actual csproj files in the solution (using the MSBUILD targets and tasks), but I find that it’s just easier to call NuGet.exe directly from the command line, especially if you’re already in build script land. Much more obvious about what’s going on.

Build and Publish

Fourth, we finally get to the meat of the build script, where it farms out the rebuild and publish to MSBUILD. You can set basically any property from the command line when doing a build through MSBUILD, and in this case it sets the selected build configuration, a local directory to dump the output to and some properties related to the eventual location of the deployed application.

The reason it sets the IsWebBootstrapper and InstallFrom properties on the command line is because I’ve specifically set the ClickOnce deployment property values in the project file to be non-functional. This is to prevent people from publishing without using the script, which as mentioned previously, can actually be a risky proposition due to the build configurations.

The build and publish is more complicated than it appears though, and the reason for that is versioning.

Versioning

Applications deployed through ClickOnce have 2 version related attributes.

The first is the ApplicationVersion, and the second is MinimumRequiredVersion.

ApplicationVersion is the actual version of the application that you are deploying. Strangely enough, this is NOT the same as the version defined in the AssemblyInfo file of the project. This means that you can publish Version 1.0.0.0 of a ClickOnce application and have the actual deployed exe not be that version. In fact, that’s the easiest path to take. It takes significantly more effort to synchronize the two.

I don’t like that.

Having multiple identifiers for the same piece of software is a nightmare waiting to happen. Especially considering that when you actually try to reference the version from inside some piece of C#, you can either use the normal way (checking the version of the executing assembly) or you can check the ClickOnce deployment version.
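As a rough illustration of the two different identifiers (a sketch only; the paths are placeholders and I'm assuming the standard deployment manifest layout):

# The version compiled into the executable (the AssemblyVersion).
[System.Reflection.AssemblyName]::GetAssemblyName("[PATH TO EXE]").Version

# The version ClickOnce thinks it is deploying (the ApplicationVersion), read
# straight out of the .application deployment manifest.
$manifest = [xml](Get-Content "[PATH TO .APPLICATION FILE]")
$manifest.assembly.assemblyIdentity.version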

Anyway, MinimumRequiredVersion is for forcing users of the ClickOnce application to update to a specific version. In this case, the product owner required that the user always be using the latest version (which I agree with), so MinimumRequiredVersion needed to be synchronized with ApplicationVersion.

ClickOnce seems to assume that someone will be manually setting the ApplicationVersion (and maybe also the MinimumRequiredVersion) before a deployment occurs, and isn’t very friendly to automation.

I ended up having to write a customised MSBUILD task. It's nothing fancy (and looking back at it, I'm pretty sure there are many better ways to do it, maybe even using the community build tasks) but it gets the job done. You can see the source code of the build task here.

It takes a path to an AssemblyInfo file, reads the AssemblyVersion attribute from it, sets the build and revision versions to appropriate values (build is set to YYDDD i.e. 14295, revision is set to a monotonically increasing number, which is reset to 0 on the first build of each day), writes the version back to the AssemblyInfo file and then outputs the generated version, so that it can be used in future build steps.

I use this custom task in a customised .targets file, which is included in the project file for the application (in the same way as the old project customisations were included above).

This is what the targets file looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <GeneratedAssemblyVersion></GeneratedAssemblyVersion>
    </PropertyGroup>
    <UsingTask TaskName="Solavirum.Build.MSBuild.Tasks.ReadUpdateSaveAssemblyInfoVersionTask" AssemblyFile="$(SolutionDir)\tools\Solavirum.Build.MSBuild.Tasks.dll" />
    <Target Name="UpdateVersion">
        <Message Text="Updating AssemblyVersion in AssemblyInfo." Importance="high" />
        <ReadUpdateSaveAssemblyInfoVersionTask AssemblyInfoSourcePath="$(ProjectDir)\Properties\AssemblyInfo.cs">
            <Output TaskParameter="GeneratedVersion" PropertyName="GeneratedAssemblyVersion" />
        </ReadUpdateSaveAssemblyInfoVersionTask>
        <Message Text="New AssemblyVersion is $(GeneratedAssemblyVersion)" Importance="high" />
        <Message Text="Updating ClickOnce ApplicationVersion and MinimumRequiredVersion using AssemblyVersion" Importance="high" />
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="ApplicationVersion" />
        </CreateProperty>
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="MinimumRequiredVersion" />
        </CreateProperty>
        <!-- 
        This particular property needs to be set because of reasons. Honestly I'm not sure why, but if you don't set it
        the MinimumRequiredVersion attribute does not appear correctly inside the deployment manifest, even after setting
        the apparently correct property above.
        -->
        <CreateProperty Value="$(GeneratedAssemblyVersion)">
            <Output TaskParameter="Value" PropertyName="_DeploymentBuiltMinimumRequiredVersion" />
        </CreateProperty>
    </Target>
    <Target Name="BeforeBuild">
        <CallTarget Targets="UpdateVersion" />
    </Target>
</Project>

It's a little hard to read (XML), and it can be hard to understand if you're unfamiliar with the way that MSBUILD deals with…things. It has a strange way of taking output from a task and doing something with it, and it took me a long time to wrap my head around it.

From top to bottom, you can see that it creates a new Property called GeneratedAssemblyVersion and then calls into the custom task (which is available in a DLL in the tools directory of the repository). The version returned from the custom task is then used to set the ApplicationVersion and MinimumRequiredVersion properties (and to log some statements about what it's doing in the build output). Finally it configures the custom UpdateVersion target to be executed before a build.

Note that it was insufficient to just set the MinimumRequiredVersion; I also had to set the _DeploymentBuiltMinimumRequiredVersion. That took a while to figure out, and I still have no idea exactly why this step is necessary. If you don't do it though, your MinimumRequiredVersion won't work the way you expect it to.

Now that we've gone on a massive detour around versioning, it's time to go back to the build script itself.

Deployment

The last step in the build script is to actually deploy the contents of the publish directory filled by MSBUILD to the remote URL.

This publish directory typically contains a .application file (which is the deployment manifest), a setup.exe and an ApplicationFiles directory containing another versioned directory with the actual EXE and supporting files inside. It's a nice, clean structure, because you can deploy over the top of a previous version and it will keep that old version's files around and only update the .application file (which defines the latest version and some other stuff).

For this application, the deployment URL is an Amazon S3 storage account, parts of which are publicly accessible. I use TNTDrive to map a folder in the S3 account to a local drive, which in my case is Y.

With the deployment location available just like a local drive, all I need to do is use robocopy to move the files over.

I’ve got some basic error checking before the copy (just to ensure that it doesn’t try to publish if the build fails) and I included a short delay before the copy because of some issues I was having with the files being locked even though MSBuild had returned.

Future Points of Improvement

I don’t like the fact that someone couldn’t just clone the repository that the build script is inside and run it. That makes me sad. You have to install the TNTDrive client and then map it to the same drive letter as the script expects. A point of improvement for me would be to use some sort of portable tool to copy into an Amazon S3 account. I’m sure the tool exists, I just haven’t had a chance to look for it yet. Ahh pragmatism, sometimes I hate you.

A second point of improvement would be to make the script run the unit and integration tests and not publish if they fail. I actually had a prototype of this working at one point, but it needed more work and I had to pull back from it in order to get the script completed in time.

Lastly, it would be nice if I had a build server for this particular application. Something like TeamCity or AppVeyor. I think this is a great idea, but for this particular application (which I only work on contractually, for a few hours each week) it's not really something I have time to invest in. Yet.

Conclusion

ClickOnce is a great technology, but it was quite an adventure getting a script based deployment to work. Documentation in particular is pretty lacklustre, especially around what exactly some of the deployment properties mean (and their behaviour).

Still, the end result solved my initial problems. I no longer have to worry about accidentally publishing an application to the wrong place, and I can just enter one simple command from the command line to publish. I even managed to fix up some versioning issues I had with the whole thing along the way.

Victory!