This post is a week later than I originally intended it to be, but I think we can all agree that terrifying and unforeseen technical problems are much more interesting than conference summaries.
Speaking of conference summaries!
DDD Brisbane 2018 was on Saturday December 1, and, as always, it was a solid event for a ridiculously cheap price. I continue to heartily recommend it to any developer in Brisbane.
In an interesting twist of fate, I actually made notes this time, so I’m slightly better prepared to write this summary.
Let’s see if it makes a difference.
I Don’t Think That Word Means What You Think It Means
The first session of the day, and thus the keynote, was a talk on Domain Driven Design by Jessica Kerr.
Some pretty good points here about feedback/growth loops, and about making sure that when you establish a loop you understand which indirect goal you are actually moving towards. The thing that resonated most with me was the idea that most long-term destinations are really about accumulating domain knowledge in the brains of your people. That sort of knowledge acquisition creates a self-perpetuating success cycle, because the people building and improving the software actually understand the problems faced by the people who use it, and can thus make better decisions on a day-to-day basis.
As a lot of that knowledge is often sequestered inside specific people’s heads, it reinforced to me that while the software itself probably makes the money in an organization, it’s the people who put it together that allow you to move forward. Retaining your people is therefore critically important, and the cost of replacing someone who is skilled in the domain is probably much higher than you think it is.
A softer, less technical session, but solid all round.
The next session that I attended was about engineering for scale from a DDD staple, Andrew Harcourt.
Presented in his usual humorous fashion, it featured a purely hypothetical situation around a census website and the requirement that it be highly available. Something that would never happen in reality, I’m sure.
Interestingly enough, it was a live demonstration as well, as he invited people to “attack” the website during the talk, to see if anyone could flood it with enough requests to bring it down. Unfortunately (fortunately?) no-one managed to do any damage to the website itself, but someone did manage to take out his Seq instance, which was pretty great.
Andrew went through a wealth of technical detail about how the website and underlying service was constructed (Docker, Kubernetes, Helm, React, .NET Core, Cloudflare) illustrating the breadth of technologies involved. He even did a live, zero-downtime deployment while the audience watched, which was impressive.
For me though, the best parts of the session were the items to consider when designing for scale, like:
- Actually understand your expected load profile. Taking the Australian Census as an example, it needed to be designed for 25 million requests over an hour (i.e. after dinner, as everyone logged on to do the thing), which works out to roughly 7,000 requests per second, rather than that same load spread evenly across a 24-hour period (a comparatively leisurely 300 or so requests per second). In my opinion, understanding your load profile is one of the more challenging aspects of designing for scale, as it is very easy to make a small mistake or misunderstanding that snowballs from that point forward.
- Make the system as simple as possible. A simpler system will have less overhead and generally be able to scale better than a complex one. The example he gave (his Hipster Census) contained a lot of technologies, but was conceptually pretty straightforward.
- Provide developers with a curated path to access the system. This was a really interesting one, as when he invited people to try and take down the website, he supplied a client library for connecting to the underlying API. What he didn’t make obvious, though, was that the supplied client library had rate limiting built in, which meant that anyone who used it to try and flood the service was kind of doomed from the start. A sneaky move indeed. I think this sort of thing would be surprisingly effective even against actual attackers, as it would catch out at least a few of them.
- Do as little as possible up front, and as much as possible later on. For the census example specifically, Andrew made a good point that it’s more important to simply accept and store the data, regardless of its validity, because no-one really cares if it takes a few months to sort through it later.
- Generate access tokens and credentials through math, so that it’s much easier to filter out bad credentials later. I didn’t entirely grok this one, because there was still a whitelist of valid credentials involved, but I think that might have just been for demonstration purposes. The intent is to make it easier to sift through the data later for valid traffic.
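The built-in rate limiting from the curated client library point can be sketched pretty simply. This is not Andrew’s actual library (which was presumably .NET); it’s just a minimal illustration of the idea, where the client silently spaces out its own requests so callers can’t flood the service through it:

```python
import time


class RateLimitedClient:
    """Hypothetical API client wrapper that quietly throttles outgoing calls.

    Anyone who uses this client to flood the service is limited to
    `max_per_second` requests, whether they realise it or not.
    """

    def __init__(self, send, max_per_second: float):
        self._send = send                      # the real transport function
        self._interval = 1.0 / max_per_second  # minimum gap between calls
        self._next_allowed = 0.0               # monotonic time of next permitted call

    def request(self, payload):
        # Sleep until the next call is allowed, then forward the request.
        now = time.monotonic()
        if now < self._next_allowed:
            time.sleep(self._next_allowed - now)
        self._next_allowed = time.monotonic() + self._interval
        return self._send(payload)
```

The sneaky part is that the throttling lives inside the convenient, well-documented path, so an attacker has to go to the effort of writing their own client before they can even start causing trouble.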
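My best guess at the “tokens through math” point is a scheme along these lines: sign an identifier with a server-side secret, so that a token’s validity can be checked with a computation rather than a database lookup, and garbage credentials can be filtered out of the stored data later without consulting any whitelist. Andrew didn’t spell out the exact mechanism, so treat this HMAC-based sketch as an assumption on my part:

```python
import hashlib
import hmac

# Hypothetical server-side secret; in reality this would come from
# configuration or a secret store, never source code.
SECRET = b"demo-only-secret"


def issue_token(user_id: str) -> str:
    """Derive a token from an identifier plus an HMAC signature."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"


def is_valid(token: str) -> bool:
    """Check a token purely by recomputing the math - no whitelist needed."""
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

With something like this, sifting the submitted data for valid traffic afterwards is just a matter of running `is_valid` over the stored tokens, which fits nicely with the “accept everything now, sort it out later” point above.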
As is to be expected from Andrew, it was a great talk with a fantastic mix of both new and shiny technology and real-world pragmatism.
The third session was from another DDD staple, Damien McLennan.
It was a harrowing tale of one man’s descent into madness.
But seriously, it was a great talk about some real-world experiences using .NET Core and Docker to build out an improved web presence for Work180. Damien comes from a long history of building enterprisey systems (his words, not mine), followed by a chunk of time being off the tools altogether, and the completely different nature of the work in his new position (CTO at Work180) threw him for a loop initially.
The goal was fairly straightforward: replace an existing hosted solution that was not scaling well with something that would.
The first issue he faced was selecting a technology stack from the multitude that were available; Node, Python, Kotlin, .NET Core and so on.
The second issue he faced, once he had made the technology decision, was feeling like a beginner again as he learned the ins and outs of an entirely new thing.
To be honest, the best part of the session was watching a consummate industry professional share his experiences struggling through the whole process of trying a completely different thing. Not from a “ooooo, a train wreck” point of view, though, because it wasn’t that at all. It was more about knowing that this is something other people have gone through successfully, which can be really helpful when it’s something you’re thinking about doing yourself.
Also, there was some cool tech stuff too.
To Be Continued
With three session summaries out of the way, I think this blog post is probably long enough.
Tune in next week for the thrilling conclusion!