Long story short: I’m taking a break from writing posts.

There are a bunch of contributing factors, but the most pertinent ones are:

  • I’m tired
  • I don’t have any good topics to talk about right now
  • I want to use my morning commute to read instead of writing

Barring sickness and holidays, I’ve written a post a week since my very first post back in September 2014, so let’s do some quick and dirty math.

There are approximately 45 pages of blog posts, and at 5 posts per page that’s well over 200 posts. Each post is conservatively 800 words, but more likely to err on the side of 1000+ because I’m a verbose person and it takes me ages to get to the point. Just like it did in that sentence.

Even if I’m conservative, that’s still at least 200,000 words, and according to Wikipedia, that’s somewhere between four and five novels’ worth. More tellingly, it’s more words than you are typically allowed to write for a PhD dissertation at a bunch of unnamed American universities.
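For what it’s worth, the quick and dirty math can be written down explicitly, taking the figures above at face value (the 900 words per post simply splits the difference between my two estimates):

```python
# Back-of-the-envelope word count, using the rough figures from the post.
pages = 45             # approximate pages of blog posts
posts_per_page = 5
words_per_post = 900   # splitting the difference between 800 and 1000+

posts = pages * posts_per_page        # 225 posts, i.e. "well over 200"
total_words = posts * words_per_post  # 202,500 words

# Wikipedia's common threshold for a novel is around 40,000-50,000 words.
novels = total_words / 45_000
print(posts, total_words, round(novels, 1))  # 225 202500 4.5
```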

Numbers and statistics are all well and good, but really I’m just trying to justify to myself that this is enough, so I don’t feel bad about not doing it for a while.

Maybe I’ll find something cool and worth talking about and I’ll be right back into it next week.

Maybe I’ll never write another word again, I don’t know.

Regardless of which path I take from here, I think that writing this blog has made me a better professional than I would have otherwise been, and I hope that at least one person on the interwebs has found the information therein useful.


It turns out that if you’re okay at being responsible for people and things, you eventually get made responsible for more people and things.

The real ramification of that more qualifier is that you might very well end up with less time to engage in the activities that contributed towards your success in the first place.

I know it’s trite, but there might actually be a grain of truth in the old Peter Principle. If you keep getting promoted until you are in a position where you can no longer reliably engage in the activities that led to your success in the first place, it’s probably going to manifest as incompetence.

I’m sure you can guess why I’m mentioning this…

Alone I Break

Historically, I’m used to being able to absorb information from multiple sources and retain it. I have something of a reputation for remembering a wide variety of things in a work context, and even outside work my head is filled with far more useless information than I really care to think about (there’s a lot of Warhammer lore in there for example, praise Sigmar).

The problem is, as I get involved in more things, I’m finding it more and more difficult to keep up with everything all at once.

Perhaps I’m getting old, and my brain is not as good as it used to be (very possible), but I think I’m just running up against the real limitations of my memory for the first time. Prior to this, I could always just focus down on one or two things at most, even though there was a lot of technical complexity in play.

The degradation was gradual.

The first thing to go was my ability to understand all the technical details about what was going on in all of the teams I was working with. That was a hard one to let go of, but at the end of the day I could still provide useful guidance and direction (where necessary) by lifting my focus and thinking about concepts at a higher level, ignoring the intricacies of implementation. Realistically, without the constantly tested and tuned technical skills acquired from actually implementing things, I wasn’t really in a position to help anyway, so it’s for the best.

The second thing that started to go though was the one-on-one interactions, and I can’t let that fly. I’ve been in situations before where I wasn’t getting clear and regular feedback from the people who were responsible for me, and I did not want to do the same thing to those I was responsible for. Being unable to stay on top of that really reinforced that I had to start doing something that I am utterly terrible at.


All In The Family

It’s not that I’m not okay with delegating; I’m just bad at it.

There are elements of ego there (i.e. the classic “If I don’t do it, it won’t be done right!”), but I also just plain don’t like having to dump work on people. It doesn’t feel right.

But the reality is that I won’t always be around, and I can’t always pay the amount of attention to everything that I would like to, so I might as well start getting people to do things and make sure that I can provide the necessary guidance to help them along the path that I believe leads to good results.

The positive side of this is that it gives plenty of new opportunities for people to step up into leadership roles, and I get to be in the perfect position to mentor those people in the way that I believe that things should be done. Obviously this represents a significant risk to the business, if they don’t want things to continue to be done in the way that I do them, but they gave up the ability to prevent that when they put me into a leadership position.

With additional leaders in place, each being responsible for their own small groups of people, my role mutates into one of providing direction and guidance (and maybe some oversight), which is a bit of a change for someone that is used to being involved in things at a relatively low level.

And letting go is hard.

Got The Life

I’ve written before about how I’m pretty consistently terrified about micromanaging the people I’m responsible for and destroying their will to live, but I think now that being aware of that and being appropriately terrified probably prevents me from falling too far into that hole. That doesn’t mean I don’t step into the hole from time to time, but I seem to have avoided falling face first so far.

Being cognitively aware of things is often a good way to counter those things after all. It’s hard to be insane when you realise that you’re insane.

So letting go is actually in my best interest, even if the end result of a situation is not necessarily what I originally envisioned. At the end of the day, if the objectives were accomplished in a sustainable fashion, it probably doesn’t really matter anyway. I’m still free to provide guidance, and if the teams involved take it as that (guidance), rather than as unbreakable commands, then I’m probably okay.

A healthy lack of involvement can also break negative patterns of reliance as well, making teams more autonomous. Within reason of course, as a lack of direction (and measurable outcomes) can be incredibly and rightly frustrating.

It’s a delicate balancing act.

Be involved just enough to foster independent thought and problem-solving, but no more than that, so as to avoid becoming a stifling presence.


This is another one of those wishy-washy touchy-feely posts where I rant about things that I don’t really understand.

I’m trying though, and the more I think (and write) about the situation the better I can reason about it all.

The real kicker here is the realization that I can’t do everything all at once, especially as my area of responsibility widens.

The situation does offer new and interesting opportunities though, and helping people to grow is definitely one of the better ones.


It’s approximately 6 months later, and our work-based D&D (Dungeons and Dragons) groups are still going strong.

Everyone is having a lot of fun, players are forming relationships, ridiculous stories are occurring regularly and the campaigns are progressing nicely. Some of the groups have even finished the smaller adventures that they were running and are looking for new challenges.

Speaking of groups: there have been some minor mutations from group to group as far as people go, but overall they are mostly the same as they were in the beginning.

And therein lies something of an issue.

There Are *Rolls Dice* 5 People In Each Group

To be honest, we play a lot of D&D each week. We have groups playing on Monday, Tuesday, Wednesday and Thursday, and there has been some interest in putting together a fifth group on Friday. Until we decide to pivot as a business into D&D related software (which I’m sure is only a matter of time), that’s probably as much D&D as we can squeeze in.

But those regular groups do lead to something of a problem; being mostly static, there is little room for new participants.

Think about it: we generally limit each group to 6 people (1 DM and 5 players), but we have to keep a little bit of overlap between the groups so that the DMs get to play as well, so with 4 active groups, we can really only involve 20ish unique people. That goes up to 25 with 5 groups obviously, but there is not much room left to grow at that stage.
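The seat arithmetic is simple enough to sketch, assuming each DM also plays in exactly one of the other groups (which is what the overlap amounts to):

```python
def unique_participants(groups: int, seats_per_group: int = 6) -> int:
    """Rough capacity estimate for our D&D groups: each group seats
    1 DM and 5 players, and each DM also occupies a player seat in one
    of the other groups, so every group contributes one overlapping seat."""
    total_seats = groups * seats_per_group
    overlapping_seats = groups  # one DM-as-player seat per group
    return total_seats - overlapping_seats

print(unique_participants(4))  # 20 unique people with 4 active groups
print(unique_participants(5))  # 25 with a fifth group
```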

I’ve got a bit of a plan to add some chaos into the whole situation later on this year (a full group reshuffle), but that is not going to magically allow a whole bunch of new people to participate, because I imagine that just about everyone will want to continue to play.

What I really want is another way to get people to play D&D that is more flexible than a long term campaign, letting people participate without having to make a long term commitment.

One Shot, One Kill

Of course there is, and it’s called a oneshot (well, that’s what I call them).

A oneshot is generally a single D&D session intended to last only a few hours, as opposed to a campaign that lasts many sessions over the course of weeks, months or years. A short, self-contained adventure that you can get just about anybody into with a little bit of preparation.

Now, because this blog tends to trail reality by a significant amount of time, I’ve actually been organising oneshots monthly since last September or so, so we’ve had a few at this point. They are typically on a Saturday, where I can safely steal one of the cool meeting rooms that we have at our office for an entire day without having to worry about stepping on anyone’s toes. Our meeting rooms are great; big tables, massive whiteboards and easy access to a kitchenette and facilities. Also free.

At this point, I’ve played in some of the oneshots and DMed in others, and every time it’s been a pretty great experience.

The last oneshot I participated in took a completely ridiculous direction where our group decided that we were a rock band and that our agent had simply booked us a really crappy gig (we were in prison), but we were determined to put on a good show anyway. It only got more ridiculous from there, and the last encounter of the day was us having a rock battle with an ancient blue dragon, with each party member having to make up their part of the final song. We also tamed a spider and wrote a song called Rider of the Spider.

The best part is that because it requires limited commitment, a oneshot gives a much wider variety of people the opportunity to sign up, assuming they can sacrifice a Saturday. Additionally, it leaves room for partners and other family members, which is a great way to get to know someone.

Partners know all the deep dark secrets about your colleagues, and in my experience, love to share them.

The only complication that I’ve found, which isn’t really all that much of a complication, is that I need to organise and sign people up for oneshots months in advance. This helps people make arrangements with family as necessary, organizing babysitting and whatnot in order to be able to spend a day enjoying themselves.

To be clear, it’s February now and I have a oneshot planned for later this month. Attendance has been sorted for this oneshot since November last year, and I’ve already signed up a bunch of people for the July oneshot.

A New Challenger Approaches

Allowing more people the opportunity to attend is not the only benefit of the oneshots though.

Theoretically, oneshots provide the perfect environment for nascent DMs to get involved without having to commit to a long and sometimes gruelling multi-month campaign. They just need to come up with an idea (or steal something off the internet), do some prep and then execute it over the course of a few hours. It’s a limited engagement almost perfect for new and inexperienced people to have a stab.

Now, I’m always interested in training up new DMs because without DMs, the whole D&D thing ceases to exist, and they are something of a rare breed (compared to players). We still lose people from time to time, and sometimes those people are my precious precious DMs. That’s not necessarily a bad thing though, as some amount of employee turnover is natural and healthy, and while the presence of a solid social component can and will reduce undesirable turnover, it’s never going to prevent it. People grow and change and move on and that’s okay.

So it helps to have a cupboard full of possible DMs.

Also, if I look at it entirely unselfishly for a few seconds, being able to DM can definitely lead to the creation of new skills that are useful outside D&D. So really I’m doing these people a favour.

Ahhhh, the sweet sound of an assuaged conscience.


I’m sure it’s obvious at this point that I want to get as many people involved with D&D as possible. Maybe we’ll even introduce other tabletop games at some point in the future, because really it’s not D&D specifically that is beneficial (though it is great), it’s the relationships and culture that it inspires via its collaborative storytelling. I’ve always wanted to play Shadowrun for example, as it’s such a cool setting, and I’m sure we’d see the same benefits regardless of which universe we’re using as a foundation.

Anyway, apart from the fact that I personally really enjoy both playing and DMing (though DMing can be exhausting sometimes), I really do believe that having a regular social activity like D&D is incredibly healthy for any organization. There are just so many great side-effects to establishing and maintaining positive relationships between colleagues through channels other than actual work.

With more people playing D&D than ever, I assume it can only get better from here.


It makes me highly uncomfortable if someone suggests that I support a piece of software without an automated build process.

In my opinion it’s one of the cornerstones on top of which software delivery is built. It just makes everything that comes afterwards easier, and enables you to think and plan at a much higher level, allowing you to worry about much more complicated topics.

Like continuous delivery.

But let’s shy away from that for a moment, because sometimes you have to deal with the basics before you can do anything else.

In my current position I’m responsible for the engineering behind all of the legacy products in our organization. At this point in time those products make almost all of the money (yay!), but contain 90%+ of the technical terror (boo!) so the most important thing from my point of view is to ensure that we can at least deliver them reliably and regularly.

Now, some of the products are in a good place regarding delivery.

Some, however, are not.

Someone’s Always Playing Continuous Integration Games

One specific product comes to mind. Now, that’s not to say that the other products are perfect (far from it in fact), but this product in particular is lacking some of the most fundamental elements of good software delivery, and it makes me uneasy.

In fairness, the product is still quite successful (multiple millions of dollars of revenue), but from an engineering point of view, that’s only because of the heroic efforts of the individuals involved.

With no build process, you suffer from the following shortcomings:

  • No versioning (or maybe ad-hoc versioning if you’re lucky). This makes it hard to reason about what version of the software the customer has, and can make support a nightmare. Especially true when you’re dealing with desktop software.
  • Mysterious or arcane build procedures. If no-one has taken the time to recreate the build environment (assuming there is one), then it probably has all sorts of crazy dependencies. This has the side effect of making it really hard to get a new developer involved as well.
  • No automated tests. With no build process running the tests, if you do have tests, they are probably not being run regularly. That’s if you have tests at all of course, because with no process running them, people probably aren’t writing them.
  • A poor or completely ad-hoc distribution mechanism. Without a build process to form the foundation of such a process, the one that does exist is mostly ad-hoc and hard to follow.

But there is no point in dwelling on what we don’t have.

Instead, let’s do something about it.

Who Cares They’re Always Changing Continuous Integration Names

The first step is a build script.

Now, as I’ve mentioned before on this blog, I’m a big fan of including the build script in the repository, so that anyone with the appropriate dependencies can just clone the repo and run the script to get a deliverable. Release candidates will be built on some sort of controlled build server obviously, but I’ve found it’s important to be able to execute the same logic both locally and remotely in order to be able to react to unexpected issues.

Of course, the best number of dependencies outside of the repository is zero, but sometimes that’s not possible. Aim to minimise them at least, either by isolating them and including them directly, or by providing some form of automated bootstrapping.
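As a sketch of the idea (and nothing more), a repository-rooted build entry point might look something like this; the tool list and function names are placeholders, not our actual Powershell framework:

```python
#!/usr/bin/env python3
"""Hypothetical build entry point committed at the root of the repository.

Anyone who clones the repository and has the (minimal) external
dependencies can run this one script and get a deliverable, and the
build server runs exactly the same logic."""
import shutil
import sys

# External dependencies kept as close to zero as possible; anything
# else should be vendored into the repository or bootstrapped here.
REQUIRED_TOOLS = ["python3"]


def check_dependencies() -> None:
    """Fail fast with a clear message instead of dying mid-build."""
    missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]
    if missing:
        sys.exit("missing required tools: " + ", ".join(missing))


def build() -> str:
    check_dependencies()
    # ... compile, test and package steps would go here ...
    return "build complete"


if __name__ == "__main__":
    print(build())
```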

This particular product is built in a mixture of Delphi (specifically Delphi 7) and .NET, so it wasn’t actually all that difficult to use our existing build framework (a horrific aberration built in Powershell) to get something up and running fairly quickly.

The hardest part was figuring out how to get the Delphi compiler to work from the command line, while still producing the same output as it would if you just followed the current build process (i.e. compilation from within the IDE).

With the compilation out of the way, the second hardest part was creating an artifact that looked and acted like the artifact that was being manually created. This comes in the form of a self-extracting zip file containing an assortment of libraries and executables that make up the “update” package.
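The plain-zip half of that packaging step is easy enough to sketch; the self-extracting wrapper and the actual file list are specific to the product, so everything here is illustrative:

```python
import zipfile
from pathlib import Path


def package_update(staging_dir: Path, artifact_path: Path) -> Path:
    """Collect the libraries and executables that were copied into the
    staging directory into a single compressed archive, preserving the
    directory layout. The real artifact wraps this in a self-extractor."""
    with zipfile.ZipFile(artifact_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for path in sorted(staging_dir.rglob("*")):
            if path.is_file():
                archive.write(path, arcname=path.relative_to(staging_dir))
    return artifact_path
```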

Having dealt with both of those challenges, it’s nothing but smooth sailing.

We Just Want to Dance Here, But We Need An AMI

Ha ha ha ha no.

Since this is a piece of legacy software, the second step was to create a build environment that could be used from TeamCity.

This means an AMI with everything required in order to execute the build script.

For Delphi 7, that means an old version of the Delphi IDE and build tools. Good thing we still had the CD that the installer came on, so we just made an ISO and remotely mounted it in order to install the required software.

Then came the multitude of library and tool dependencies specific to this particular piece of software. Luckily, someone had actually documented enough instructions on how to set up a development environment, so we used that information to complete the configuration of the machine.

A few minor hiccups later and we had a build artifact coming out of TeamCity for this product for the very first time.

A considerable victory.

But it wasn’t versioned yet.

They Call Us Irresponsible, The Versioning Is A Lie

This next step is actually still under construction, but the plan is to use the TeamCity build number input and some static version configuration stored inside the repository to create a SemVer styled version for each build that passes through TeamCity.

Any build not passing through TeamCity, or being built from a branch should be tagged with an appropriate pre-release string (i.e. 1.0.0-[something]), allowing us to distinguish good release candidates (off master via TeamCity) from dangerous builds that should never be released to a real customer.
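The intended scheme is simple enough to express; this Python sketch is purely an illustration of the rules (the real logic lives in the Powershell framework, and all names here are assumptions):

```python
def generate_version(major_minor: str, build_number: int,
                     branch: str, on_build_server: bool) -> str:
    """SemVer-styled version: static major.minor from configuration in
    the repository, the TeamCity build counter as the patch number, and
    a pre-release tag for anything that is not a master build produced
    by TeamCity (so it can never be mistaken for a release candidate)."""
    version = "{}.{}".format(major_minor, build_number)
    if not on_build_server:
        version += "-local"
    elif branch != "master":
        version += "-" + branch.replace("/", "-")
    return version


print(generate_version("1.0", 57, "master", True))       # 1.0.57
print(generate_version("1.0", 57, "feature/foo", True))  # 1.0.57-feature-foo
print(generate_version("1.0", 57, "master", False))      # 1.0.57-local
```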

The abomination of a Powershell build framework allows for most of this sort of stuff, but assumes that a .NET style AssemblyInfo.cs file will exist somewhere in the source.

At the end of the day, we decided to just include such a file for ease of use, and then propagate that version generated via the script into the Delphi executables through means that I am currently unfamiliar with.

Finally, all builds automatically tag the appropriate commit in Git, but that’s pretty much built into TeamCity anyway, so barely worth mentioning.


Like I said at the start of the post, if you don’t have an automated build process, you’re definitely doing it wrong.

I managed to summarise the whole “lets construct a build process” journey into a single, fairly lightweight blog post, but a significant amount of work went into it over the course of a few months. I was only superficially involved (as is mostly the case these days), so I have to give all of the credit to my colleagues.

The victory that this build process represents cannot be overstated though, as it will form a solid foundation for everything to come.

A small step in the greater scheme of things, but I’m sure everyone knows the quote at this point.


I don’t think I’ve ever had a good experience dealing with dates, times and timezones in software.

If it’s not some crazy data issue that you don’t notice until it’s too late, then it’s probably the confusing nature of the entire thing that leads to misunderstandings and preventable errors. Especially when you have to include Daylight Savings in the equation, which is just a stupid and unnecessary complication.

It’s all very annoying.

Recently we found ourselves wanting to know what timezones our users were running in, but of course, nothing involving dates and times is ever easy.

Time Of My Life

Whenever we migrate user data from our legacy platform into our cloud platform, we have to take into account the timezone that the user wants to operate in. Generally this is set at the office level (i.e. the bucket that segregates one set of user profiles and data from the rest), so we need to know that piece of information right at the start of the whole process, when the office is created.

Now, to be very clear, what we need to know is the user’s preferred timezone, not their current offset from UTC. The offset by itself is not enough information, because we need to be able to safely interpret dates and times both in the past (due to the wealth of historical data we bring along) and in the future (things that are scheduled to happen, but haven’t happened yet). A timezone contains enough information for us to interact with any date and time, and includes a very important piece of information:

If/when daylight savings is in effect and what sort of adjustment it makes to the normal offset from UTC.
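A concrete illustration of why the offset alone is not enough, using Python’s zoneinfo purely as an example (this is obviously not part of the legacy application, and the zone chosen is arbitrary):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+; needs tzdata available

# One timezone, two different UTC offsets depending on the date. A single
# stored offset can interpret one of these instants correctly, but not both.
tz = ZoneInfo("America/New_York")
winter = datetime(2023, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=tz)

print(winter.utcoffset() == timedelta(hours=-5))  # True: standard time
print(summer.utcoffset() == timedelta(hours=-4))  # True: daylight savings
```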

Right now we require that the user supply this information as part of the migration process. By itself, it’s not exactly a big deal, but we want to minimise the amount of involvement we require from the user in order to reduce the amount of resistance that the process can cause. The migration should be painless, and anything we can do to make it so is a benefit in the long run.

We rely on the user here because the legacy data doesn’t contain any indication as to what timezone it should be interpreted in.

So we decided to capture it.

No Time To Explain

The tricky part of capturing the timezone is that there are many users/machines within a client site that access the underlying database, and each one might not be set to the same timezone. It’s pretty likely for them all to be set the same way, but we can’t guarantee it, so we need to capture information about every user who is actively interacting with the software.

So the plan is straightforward; when the user logs in, record some information in the database describing the timezone that they are currently using. Once this information exists, we can sync it up through the normal process and then use it within the migration process. If all of the users within an office agree, we can just set the timezone for the migration. If there are conflicts we have to revert back to asking the user.
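The decision at the end of that process is trivial to express; a minimal sketch (the function name is made up, and None stands in for “ask the user”):

```python
def resolve_office_timezone(user_timezones):
    """Apply the rule from the migration plan: if every active user in
    the office reported the same timezone, use it for the migration;
    on any conflict (or if nothing was captured), fall back to asking."""
    distinct = set(user_timezones)
    if len(distinct) == 1:
        return distinct.pop()
    return None  # conflict or no data: revert to asking the user


print(resolve_office_timezone(["AUS Eastern Standard Time"] * 3))
print(resolve_office_timezone(["AUS Eastern Standard Time",
                               "W. Australia Standard Time"]))
```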

Of course, this is where things get complicated.

The application login is written in VB6, and let’s be honest, is going to continue to be written in VB6 until the heat death of the universe.

That means Win32 API calls.

The one in particular that we need is GetTimeZoneInformation, which will fill out the supplied TIME_ZONE_INFORMATION structure when called and return a value indicating the usage of daylight savings for the timezone information specified.

Seems pretty straightforward in retrospect, but it was a bit of a journey to get there.

At first we thought that we had to use the *Bias fields to determine whether or not daylight savings was in effect, but that in itself was brought about by a misunderstanding, because we don’t actually care if daylight savings is in effect right now, just what the timezone is (because that information is encapsulated in the timezone itself). It didn’t help that we were originally outputting the current offset instead of the timezone as well.

Then, even when we knew we had to get at the timezone, it still wasn’t clear which of the two fields (StandardName or DaylightName) to use. That is, until we looked closer at the documentation of the function and realised that the return value could be used to determine which field we should refer to.

All credit to the person who implemented this (a colleague of mine), who is relatively new to the whole software development thing, and did a fine job, once we managed to get a clear idea of what we actually had to accomplish.

It’s Time To Stop

At the end of the day we’re left with something that looks like this.

Private Type TIME_ZONE_INFORMATION
    Bias As Long
    StandardName(0 To 63) As Byte
    StandardDate As SYSTEMTIME   ' the standard Win32 SYSTEMTIME, declared elsewhere
    StandardBias As Long
    DaylightName(0 To 63) As Byte
    DaylightDate As SYSTEMTIME
    DaylightBias As Long
End Type

Private Declare Function GetTimeZoneInformation Lib "kernel32" _
    (lpTimeZoneInformation As TIME_ZONE_INFORMATION) As Long

' Return values documented for GetTimeZoneInformation.
Private Const TIME_ZONE_ID_STANDARD As Long = 1
Private Const TIME_ZONE_ID_DAYLIGHT As Long = 2

Private Function GetCurrentTimeZoneName() As String
    Dim tzi As TIME_ZONE_INFORMATION
    Dim rawName As String

    ' The return value indicates which of the two name fields applies.
    If GetTimeZoneInformation(tzi) = TIME_ZONE_ID_DAYLIGHT Then
        rawName = tzi.DaylightName   ' Byte array to String is a direct copy in VB6
    Else
        rawName = tzi.StandardName   ' standard time, or unknown/invalid
    End If

    GetCurrentTimeZoneName = Replace(rawName, Chr(0), "")
End Function

That function for extracting the timezone name is then used inside the part of the code that captures the set of user information that we’re after and stores it in the local database. That code is not particularly interesting though; it’s just a VB6 ADODB RecordSet.

Hell, taken in isolation, and ignoring the journey that it took to get here, the code above isn’t all that interesting either.


With the required information being captured into the database, all we have to do now is sync it up, like any other table.

Of course, we have to wait until our next monthly release to get it out, but that’s not the end of the world.

Looking back, this whole dance was less technically challenging and more just confusing and hard to clearly explain and discuss.

We got there in the end though, and the only challenge left now belongs to another team.

They have to take the timezone name that we’re capturing and turn it into a Java timezone/offset, which is an entirely different set of names that hopefully map one-to-one.
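For reference, the standard source for that translation is the Unicode CLDR windowsZones table, which maps Windows timezone names to the IANA identifiers that Java understands. The three entries below are real mappings from that table, but the helper itself is just a sketch:

```python
# A handful of entries from the CLDR windowsZones mapping; the full
# table has a few hundred, and several Windows zones can share a target.
WINDOWS_TO_IANA = {
    "AUS Eastern Standard Time": "Australia/Sydney",
    "E. Australia Standard Time": "Australia/Brisbane",
    "GMT Standard Time": "Europe/London",
}


def to_iana(windows_name: str) -> str:
    try:
        return WINDOWS_TO_IANA[windows_name]
    except KeyError:
        # No mapping: the caller has to fall back to asking someone.
        raise KeyError("no CLDR mapping for " + repr(windows_name)) from None


print(to_iana("E. Australia Standard Time"))  # Australia/Brisbane
```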

Since the situation involves dates and times though, I doubt it will be that clean.