
I expect accounting software to make some pretty convincing guarantees about the integrity of its data over time.

From my experience, such software generally restricts the user's ability to change something once it has been entered. It's somewhat unforgiving of innocent mistakes (move money to X, whoops, now you've got to move it back, and there's a full audit trail of your mistake), but it makes for a secure system in the long run.

Our legacy software is very strict about maintaining the integrity of its transactional history, and it has been for a very long time.

Except when you introduce the concept of database restores, but that’s not a topic for this blog post.

Nothing is perfect though, and a long history of development by a variety of parties (some highly competent, some… not) has led to a complicated system that doesn't play by its own rules every now and then.

It's like a reverse Butterfly Effect: changing the present can unfortunately change the past.

It's All Wrong

A natural and correct assumption about any report that comes out of a piece of accounting software, especially one that focuses on transactions, is that it shouldn't matter when you look at the report (today, tomorrow, six months from now): if you're looking at data in the past, it shouldn't be changing.

When it comes to the core transactional items (i.e. "Transferred $200 to Bob"), we're good. Those sorts of things are immutable at the time they occur, and no matter when you view the data, it's always the same.

Being that this is a blog post, suspiciously titled “Protecting the Timeline”, I think you can probably guess that something is rotten in the state of Denmark.

While the core transactional information is unimpeachable, sometimes there is meta information attached to a transaction with less moral integrity. For example, if the transaction is an EFT payment exiting the system, it needs to record bank account details to be compliant with legislation (i.e. “Transferred $200 to Bob (EFT: 123-456, 12346785)”).

Looking at the system, it's obvious that the requirement to capture this additional information came after the original implementation. Instead of capturing the entire payload when the operation is executed, the immutable transaction is dynamically linked to the entities involved, and the bank account details (or equivalent) are loaded from the current state of the entity whenever a report is created.

So we know unequivocally that the transaction was an EFT transaction, but we don’t technically know which account the transfer targeted. If the current details change, then a re-printed report will technically lie.
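
To make that concrete, here's a toy sketch (in Python, with completely made-up names and structures; the real system looks nothing like this) of the shape of the original approach:

```python
# Toy model of the original design: the transaction stores only a link to the
# payee, so reports read whatever the payee's details happen to be *today*.
entities = {
    "bob": {"name": "Bob", "bsb": "123-456", "account": "12346785"},
}

transactions = [
    {"id": 1, "type": "EFT", "amount": 200, "payee_id": "bob"},
]

def report_line(transaction):
    payee = entities[transaction["payee_id"]]  # current state, not historical
    return (f"Transferred ${transaction['amount']} to {payee['name']} "
            f"(EFT: {payee['bsb']}, {payee['account']})")

print(report_line(transactions[0]))      # accurate today...
entities["bob"]["account"] = "99999999"  # ...then Bob changes bank accounts...
print(report_line(transactions[0]))      # ...and the reprinted report now lies
```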

Freeze Frame

The solution is obvious.

Capture all of the data at the time the operation is executed, not just some of it.

This isn’t overly difficult from a technical point of view, just hook into the appropriate places, capture a copy of the current data and store it somewhere safe.

Whenever the transactions are queried (i.e. in the report), simply load the same captured data and present it to the user.
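
Continuing the toy example from earlier (still illustrative, not our actual implementation), the fix looks something like this:

```python
# Snapshot the payee's details at the moment the payment is executed, and have
# the report read only the snapshot from then on.
snapshots = {}  # transaction id -> captured details

def execute_eft_payment(transaction):
    payee = entities[transaction["payee_id"]]
    snapshots[transaction["id"]] = {
        "name": payee["name"],
        "bsb": payee["bsb"],
        "account": payee["account"],
    }
    transactions.append(transaction)

def report_line(transaction):
    captured = snapshots[transaction["id"]]  # never the live entity record
    return (f"Transferred ${transaction['amount']} to {captured['name']} "
            f"(EFT: {captured['bsb']}, {captured['account']})")
```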

Of course, if the requirements change again in the future (and we need to show additional information like Bank Name or something), then we will have to capture that as well, and all previous reports will continue to show the data as it was before the new requirement. That's the tradeoff: you can't capture everything, and whatever you don't capture is never there later.

But what about the literal mountain of data that already exists with no captured meta information?

Timecop!

There were two obvious options that we could see to deal with the existing data:

  1. Augment the reporting/viewing logic such that it would use the captured data if it existed, but fall back to the old approach if not.
  2. Rewrite history using current information and “capture” the current data, then just use the captured data consistently (i.e. in reports and whatnot).

The benefit of option one is that we're just extending logic that has existed for years: when we have better data we use it, and if not, we fall back to old faithful. The problem here is one of complication, as every usage now needs to do two things, with alternate code paths. We want to make the system simpler over time (and more reliable), not harder to grok. Also, doing two operations instead of one, combined with the terrible frameworks in use (a positively ancient version of Crystal Reports), led to all sorts of terrible performance problems.
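
In terms of the toy example, option one looks roughly like this, with every read site growing a second code path:

```python
def bank_details_for_report(transaction):
    captured = snapshots.get(transaction["id"])
    if captured is not None:
        # New transactions: use the data captured at payment time.
        return captured["bsb"], captured["account"]
    # Old transactions: fall back to the current entity state, like the
    # reports have always done.
    payee = entities[transaction["payee_id"]]
    return payee["bsb"], payee["account"]
```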

Option two basically replicates the logic in option one, but executes it only once, when we distribute the upgrade to our users, essentially capturing data at that point in time, which then becomes immutable. From that point forward everything is simple: just use the new approach, and all of the old data is the same as it would have been if the reports had been printed out normally.
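
Again in terms of the toy example, option two is just a one-off backfill at upgrade time, after which the read path stays single-track:

```python
def backfill_snapshots():
    for transaction in transactions:
        if transaction["id"] in snapshots:
            continue  # already captured at payment time
        # Capture the *current* entity state, exactly as a report printed at
        # upgrade time would have shown it, and treat it as immutable from here.
        payee = entities[transaction["payee_id"]]
        snapshots[transaction["id"]] = {
            "name": payee["name"],
            "bsb": payee["bsb"],
            "account": payee["account"],
        }
```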

If you couldn’t guess, we went with option two.

Conclusion

What we were left with was a more reliable reporting system, specifically focused on the chronological security of its data.

Also, I’m pretty sure I made up the term “chronological security”, but it sounds cool, so I’m pretty happy.

I honestly don't know what led to the original decision not to capture key parts of the transaction in an immutable fashion, and with hindsight it's easy for me to complain about it. I'm going to assume the group (or maybe even individual) that developed the feature simply did not think through the ramifications of the implementation over time. Making good software requires a certain level of care, and I know for a fact that that level of care was not always present for our long-suffering legacy software.

We’re better now, but that’s still a small slice of the overall history pie, and sometimes we build on some very shaky foundations.