I assume that, like me, you sometimes get into a situation where you want to execute a script, but you definitely don’t want some of its more permanent side effects to happen. Scripts can do all sorts of crazy things, like committing files into git, making changes to the local machine or publishing files into production, and you definitely don’t want those things to happen when you’re not intending them to.
This becomes even more important when you want to start writing tests for your scripts (or at least their components, like functions). You definitely don’t want the execution of your tests to change something permanently, especially if it changes the behaviour of the system under test, because then the next time you run the test it gives you a different result or executes a different code path. Those are all things you want to avoid if you want high quality tests.
In my explorations of the issue, I have come across two solutions. One helps to deal with side effects during tests and the other gives you greater control over your scripts, allowing you to develop them with some security that you are not making unintended changes.
Testing the Waters
I’ve recently started using Pester to test my Powershell functions.
The lack of testing (other than manual testing of course) in my script development process was causing me intense discomfort, coming as I do from a background where I always wrote tests (before, during and after) whenever I was developing a feature in C#.
It’s definitely improved my ability to write Powershell components quickly and in a robust way, and it has improved my ability to refactor, safe in the knowledge that if I mess it up (and in a dynamic scripting language like Powershell, with no Visual Studio calibre IDE, you will mess it up) the tests will at least catch the most boneheaded mistakes. Maybe even the more subtle mistakes too, if you write good tests.
Alas, I haven’t quite managed to get a handle on how to accomplish dependency injection in Powershell, but I have some ideas that may turn up in a future blog post. Or they may not, because it might be a terrible idea, only time will tell.
To tie this apparently pointless and unrelated topic back into the blog post, sometimes you need a way to control the results returned from some external call or to make sure that some external call doesn’t actually do anything. Luckily, Powershell being a dynamic language, you can simply overwrite a function definition in the scope of your test. I think.
I had to execute a Powershell script from within TestComplete recently, and was surprised when my trusty calls to write-host (for various informational logging messages) threw errors when the script was executed via the Powershell object in System.Management.Automation. The behaviour makes perfect sense when you think about it, as that particular implementation of a host environment simply does not provide a mechanism to output anything that doesn’t go through the normal streams. I mention this because it was a problem that I managed to solve (albeit in a hacky way) by providing a blank implementation of write-host in the same session as my script, effectively overriding the implementation that was throwing errors.
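That hacky fix is simple enough to sketch. Because Powershell resolves functions before cmdlets, a function with the same name shadows the built-in:

```powershell
# A no-op write-host defined in the same session shadows the built-in
# cmdlet, so later calls in the script silently do nothing instead of
# throwing in a host environment that has nowhere to display them.
function write-host { }

write-host "This informational message is now swallowed without error."
```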
Pester provides a mechanism for doing just this, through the use of Mocks.
I’d love to write a detailed example of how to use Mocks in Pester here, but to be honest, I haven’t had the need as of yet (like I said, I’ve only very recently started using Pester). Luckily the Pester wiki is pretty great, so there’s enough information there if you want to have a read.
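I will, however, sketch the rough shape of it, assuming Pester is available (the function names here are invented, and newer Pester versions want `Should -Be` rather than `Should Be`):

```powershell
# Hypothetical component under test; the names are invented for illustration.
# The real dependency deliberately throws, so it is obvious if a test ever
# reaches it instead of the mock.
function Get-VersionFromGit { throw "This would really call git." }
function New-BuildLabel { "build-$(Get-VersionFromGit)" }

Describe "New-BuildLabel" {
    It "prefixes the version returned by git" {
        # Mock shadows Get-VersionFromGit within this block, so the test
        # never touches the real dependency.
        Mock Get-VersionFromGit { "1.2.3" }
        New-BuildLabel | Should Be "build-1.2.3"
    }
}
```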
I’m very familiar with mocking as a concept though, as I use it all the time in my C# tests. I personally am a fan of NSubstitute, but I’ve used Moq as well.
The only point I will make is that without dependency injection advertising what your component’s dependencies are, you have to be aware of its internal implementation in order to Mock those dependencies out. This makes me a little bit uncomfortable, but still, being able to Mock those dependencies out instead of having them hardcoded is much preferred.
Zhu Li, Do the Thing (But Not the Other Thing)
The second approach that I mentioned is a concept that is built into Powershell, which I have stolen and bastardised for my own personal gain.
A lot of the pre-installed Powershell components allow you to execute the script in -WhatIf mode.
WhatIf mode essentially runs the script as normal, except it doesn’t allow it to actually make any permanent changes. It’s up to the component exactly what it considers to be permanent changes, but it’s typically things like changing system settings and interacting with the file system. Instead it just writes out messages to the appropriate stream stating the action that would normally have occurred. I imagine that depending on how your component is written, it might not react well when the files it asks to have created or deleted don’t actually change as expected, but it’s still an interesting feature all the same.
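To make that concrete, here is a minimal sketch of the built-in mechanism (the function and its behaviour are invented for illustration). Declaring SupportsShouldProcess gives the function a -WhatIf switch for free, and ShouldProcess returns false (after writing a “What if” message) when that switch is set:

```powershell
# An advanced function that guards its permanent change behind
# $PSCmdlet.ShouldProcess. With -WhatIf, each removal is described
# to the host instead of being performed.
function Remove-StaleLogs
{
    [CmdletBinding(SupportsShouldProcess=$true)]
    param([string]$Directory)

    foreach ($file in Get-ChildItem -Path $Directory -Filter *.log)
    {
        if ($PSCmdlet.ShouldProcess($file.FullName, "Remove"))
        {
            Remove-Item -Path $file.FullName
        }
    }
}
```

Calling `Remove-StaleLogs -Directory C:\Temp\Logs -WhatIf` prints what would have been removed without touching the file system.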
In my case, I had a build and publish script that changed the AssemblyInfo of a C# project and then committed those changes to git, as well as tagging git with a build number when the publish completed successfully. I had to debug some issues with the script recently, and I wanted to run it without any of those more permanent changes happening.
This is where I leveraged the -WhatIf switch, even though I used it in a slightly different way, and didn’t propagate the switch down to any components being used by my script. Those components were mostly non-Powershell (things like git, MSBuild and robocopy), so it wouldn’t have helped anyway.
I used the switch to turn off the various bits that made more permanent changes, and to instead output a message through the host describing the action that would have occurred. I left in the parts that made permanent changes to the file system (i.e. the files output from MSBuild) because those don’t have any impact on the rest of the system.
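The shape of it was something like this (the names and messages are invented, but the pattern is the one just described): the permanent git operations are gated behind the switch, while the harmless build output is left alone.

```powershell
function Invoke-BuildAndPublish
{
    param([switch]$WhatIf)

    # Producing build output only touches the local file system, so it
    # always runs (the solution name is illustrative).
    # & msbuild .\Solution.sln /p:Configuration=Release

    if ($WhatIf)
    {
        # Describe the permanent changes instead of making them.
        write-host "WHATIF: would have committed AssemblyInfo changes and tagged git with the build number."
        return
    }

    & git commit -am "Update AssemblyInfo for build."
    & git tag "build-1.2.3"
}
```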
Of course you still need to test the script as a whole, which is why we have a fully fledged development environment that we can freely publish to as much as we like, but it’s still nice to execute the script safe in the knowledge that it’s not going to commit something to git.
I’ve found the WhatIf approach to be very effective, but it relies entirely on the author of the script to select the bits that they think are permanent system changes and distinguish them from ones that are not as permanent (or at least easier to deal with, like creating new files). Without a certain level of analysis and thinking ahead, the approach obviously doesn’t work.
I’ve even considered defaulting WhatIf to on, to ensure that it’s a conscious effort to make permanent changes, just as a safety mechanism, mostly to protect future me from running the script in a stupid way.
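That would be as simple as giving the switch a default value, so that you have to explicitly opt in to danger (the script name below is hypothetical):

```powershell
# At the top of the script: -WhatIf defaults to on, so permanent
# changes only happen when the caller explicitly disables it with
# .\publish.ps1 -WhatIf:$false
param([switch]$WhatIf = $true)

if ($WhatIf)
{
    write-host "WHATIF: no permanent changes will be made."
}
```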
When programming, it’s important to be aware of, and to limit, any side effects of the code that you have written, both for testing and for development. The same holds true for scripts. The complication here is that scripts tend to bundle up lots of changes to the system being acted upon as their entire purpose, so you have to be careful when selecting which effects you want to minimise while developing.
In other news, I’ve been working on setting up a walking skeleton for a new service that my team is writing. Walking skeleton is a term referenced in Growing Object Oriented Software, Guided By Tests, and describes writing the smallest piece of functionality possible first, and then ensuring that the entire build and deployment process is created and working before doing anything else.
I suspect I will make a series of blog posts about that particular adventure.
Spoiler alert, it involves AWS and Powershell.