
I expect accounting software to make some pretty convincing guarantees about the integrity of its data over time.

From my experience, such software generally restricts the user's ability to change something once it has been entered. It's somewhat unforgiving of innocent mistakes (move money to X, whoops, now you've got to move it back, and there is a full audit trail of your mistake), but it makes for a secure system in the long run.

Our legacy software is very strict about maintaining the integrity of its transactional history, and it has been for a very long time.

Except when you introduce the concept of database restores, but that’s not a topic for this blog post.

Nothing is perfect though, and a long history of development by a variety of parties (some highly competent, some…not) has led to a complicated system that doesn't play by its own rules every now and then.

It's like a reverse Butterfly Effect: changing the present can unfortunately change the past.

It's All Wrong

A natural and correct assumption about any report that comes out of a piece of accounting software, especially one that focuses on transactions, is that it shouldn't matter when you look at it (today, tomorrow, six months from now): if you're looking at data in the past, it shouldn't be changing.

When it comes to the core transactional items (i.e. “Transferred $200 to Bob”) we're good. Those sorts of things are immutable from the moment they occur, and it doesn't matter when you view the data, it's always the same.

Being that this is a blog post, suspiciously titled “Protecting the Timeline”, I think you can probably guess that something is rotten in the state of Denmark.

While the core transactional information is unimpeachable, sometimes there is meta information attached to a transaction with less moral integrity. For example, if the transaction is an EFT payment exiting the system, it needs to record bank account details to be compliant with legislation (i.e. “Transferred $200 to Bob (EFT: 123-456, 12346785)”).

Looking at the system, it's obvious that the requirement to capture this additional information came after the original implementation. Instead of capturing the entire payload when the operation is executed, the immutable transaction is dynamically linked to the entities involved, and the bank account details (or equivalent) are loaded from the current state of the entity whenever a report is created.

So we know unequivocally that the transaction was an EFT transaction, but we don't actually know which account the transfer targeted at the time. If the current details change, then a re-printed report will technically lie.

Freeze Frame

The solution is obvious.

Capture all of the data at the time the operation is executed, not just some of it.

This isn’t overly difficult from a technical point of view, just hook into the appropriate places, capture a copy of the current data and store it somewhere safe.

Whenever the transactions are queried (i.e. in the report), simply load the same captured data and present it to the user.
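
To make the shape of that concrete, here's a rough sketch of the capture step. The names and types are purely illustrative (our actual schema is different), but the idea is the same: copy the details at execution time and store them against the transaction.

```csharp
using System;

// Illustrative types only; the real schema and names are different.
public class BankAccount
{
    public string Bsb { get; set; }
    public string AccountNumber { get; set; }
}

public class EftDetailsSnapshot
{
    public Guid TransactionId { get; set; }
    public string Bsb { get; set; }
    public string AccountNumber { get; set; }
    public DateTime CapturedAtUtc { get; set; }
}

public static class EftDetailsCapture
{
    // Called at the moment the payment is executed, never later. The snapshot
    // is stored against the transaction and is what the reports read from,
    // so later edits to the linked entity can't rewrite history.
    public static EftDetailsSnapshot Capture(Guid transactionId, BankAccount current)
    {
        return new EftDetailsSnapshot
        {
            TransactionId = transactionId,
            Bsb = current.Bsb,
            AccountNumber = current.AccountNumber,
            CapturedAtUtc = DateTime.UtcNow
        };
    }
}
```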

Of course, if the requirements change again in the future (and we need to show additional information like Bank Name or something), then we will have to capture that as well, and all previous reports will continue to show the data as it was before the new requirement. That's the tradeoff: you can't capture everything, and whatever you don't capture is never there later.

But what about the literal mountain of data that already exists with no captured meta information?

Timecop!

There were two obvious options that we could see to deal with the existing data:

  1. Augment the reporting/viewing logic such that it would use the captured data if it existed, but would revert back to the old approach if not.
  2. Rewrite history using current information and “capture” the current data, then just use the captured data consistently (i.e. in reports and whatnot).

The benefit of option one is that we're just extending the logic that has existed for years. When we have better data we can use that, but if not, we just fall back to old faithful. The problem is one of complication: every usage now needs to do two things, with alternate code paths. We want to make the system simpler over time (and more reliable), not harder to grok. Also, doing two operations instead of one, combined with the terrible frameworks in use (a positively ancient version of Crystal Reports), led to all sorts of nasty performance problems.

Option two basically replicates the logic in option one, but executes it only once, when we distribute the upgrade to our users, essentially capturing the data at that point in time, after which it becomes immutable. From that point forward everything is simple: just use the new approach, and all of the old data is the same as it would have been if the reports had been printed out at the time.
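
For a rough feel of what option two involves (again, purely illustrative names, reusing the types from the sketch earlier), the backfill is conceptually just a one-off loop over the transactions that don't have a snapshot yet, resolving the details exactly as the old reporting logic would have:

```csharp
using System;
using System.Collections.Generic;

public static class SnapshotBackfill
{
    // One-off migration run as part of the upgrade: for every EFT transaction
    // without a snapshot, capture the currently linked account details, which
    // is exactly what the old report logic would have resolved anyway.
    public static IEnumerable<EftDetailsSnapshot> Backfill(
        IEnumerable<(Guid TransactionId, BankAccount CurrentAccount)> transactionsWithoutSnapshots)
    {
        foreach (var (transactionId, currentAccount) in transactionsWithoutSnapshots)
        {
            yield return EftDetailsCapture.Capture(transactionId, currentAccount);
        }
    }
}
```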

If you couldn’t guess, we went with option two.

Conclusion

What we were left with was a more reliable reporting system, specifically focused around the chronological security of data.

Also, I’m pretty sure I made up the term “chronological security”, but it sounds cool, so I’m pretty happy.

I honestly don’t know what led to the original decision not to capture key parts of the transaction in an immutable fashion, and with hindsight it's easy for me to complain about it. I'm going to assume the group (or maybe even individual) that developed the feature simply did not think through the ramifications of the implementation over time. Making good software requires a certain level of care, and I know for a fact that that level of care was not always present for our long-suffering legacy software.

We’re better now, but that’s still a small slice of the overall history pie, and sometimes we build on some very shaky foundations.


If you’re writing line of business software of almost any description, you probably need to generate reports.

In the industry that I'm currently working in, these reports are typically financial, showing the movement of money relating to a trust account.

Our legacy product has been around for a very, very long time, so it has all the reports you could need. Unfortunately, they are generated using an ancient version of Crystal Reports, which I'm pretty sure is possessed by a demonic entity, so they can be a bit of a nightmare to maintain.

For our new cloud product, I'm not entirely familiar with how we deal with reports, but I think that they are generated using HTML and then converted to PDF. It's enough of a feature that there's a whole reporting subsystem dedicated to the task.

Unfortunately, our most recent development efforts in the legacy product fall somewhere in the middle between terrifying ancient evil and relatively modern report generation processes.

The crystal reports component is VB6 native, and all of our new functionality is written in C# (using WPF for the view layer). We could call back into the VB6 to generate a report, but honestly, I don’t want to touch that with a ten-foot pole. We can’t easily leverage the HTML/PDF generation capabilities of the new cloud product either, as it was never built to be reused by an entirely different domain.

As a result, we’ve mostly just shied away from doing reports as a part of new features.

Our latest feature is a little different though, as it is an automated receipting solution, and a report of some description is no longer optional.

Last responsible moment and all that.

Forming An Opinion

If you’re trying to build out a report in WPF, you’d think that there would be a component there all ready to go.

You’d be mostly wrong, at least as far as native WPF is concerned. There are a few bits and pieces around, but nothing particularly concrete or well put together (at least as far as we could determine anyway).

Instead, most people recommend that you use the Windows Forms Report Viewer, and the systems that it is built on.

We poked at this for a little while, but it just seemed so…archaic and overcomplicated. All we wanted was to take a view model describing the report (of our design) and bind it, just like we would for a normal WPF view.

Enter the FlowDocument.

In The Flow

I’ll be honest, I didn’t actually do the work to build a report out in WPF using FlowDocuments, so most of what I’m writing here is second hand knowledge. I didn’t have to live through the pain, but the colleague that did it assures me that it was quite an ordeal.

At their core, FlowDocuments allow you to essentially create a document (like a newspaper article or something similar) in WPF. They handle things like sizing the content to the available area, scrolling and whatnot, all with the capability to render normal XAML controls alongside textual constructs (paragraphs, images, etc).
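
As a tiny illustration (nothing like our actual report, just the general shape), a FlowDocument built in code mixes ordinary WPF controls with document blocks:

```csharp
using System.Windows.Controls;
using System.Windows.Documents;

// A minimal, made-up sketch of a FlowDocument mixing text, a control and a table.
public static class FlowDocumentSketch
{
    public static FlowDocument Build()
    {
        var document = new FlowDocument();

        // Plain textual content, laid out and reflowed by the document itself.
        document.Blocks.Add(new Paragraph(new Run("Receipting Report")));

        // An ordinary WPF control hosted alongside the textual blocks.
        var summary = new TextBlock { Text = "Total receipts: 42" };
        document.Blocks.Add(new BlockUIContainer(summary));

        // Tabular content uses the document's own Table primitives.
        var table = new Table();
        var group = new TableRowGroup();
        var row = new TableRow();
        row.Cells.Add(new TableCell(new Paragraph(new Run("Date"))));
        row.Cells.Add(new TableCell(new Paragraph(new Run("Amount"))));
        group.Rows.Add(row);
        table.RowGroups.Add(group);
        document.Blocks.Add(table);

        return document;
    }
}
```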

There are a few things that they don’t do out of the box though:

  • Pagination when printing. The default paginator is mostly concerned with making pages for a screen (rather than a printed document), and doesn't allow for headers or footers at all. As a result, we implemented a custom DocumentPaginator that did what we needed it to do (there's a sketch of the general idea just after this list).
  • Templated content repetition. If you’re using WPF, and MVVM, you’re probably familiar with the ItemsControl (or its equivalents). If you want to do something similar in a FlowDocument though (i.e. bind to a list of things), you’ll need to put together a custom templating system. This is relevant to us because our report is mostly tabular, so we just wanted a relatively simple repeater.
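
To give an idea of the paginator piece, here's a simplified sketch of the usual approach (the names are made up, and this is not the code my colleague actually wrote): wrap the FlowDocument's own paginator and compose a header onto each page.

```csharp
using System.Globalization;
using System.Windows;
using System.Windows.Documents;
using System.Windows.Media;

// Simplified, illustrative paginator: delegates to the FlowDocument's default
// paginator and stamps a header onto every page it produces.
public class HeaderedPaginator : DocumentPaginator
{
    private readonly DocumentPaginator _inner;
    private readonly string _title;

    public HeaderedPaginator(FlowDocument document, string title)
    {
        _inner = ((IDocumentPaginatorSource)document).DocumentPaginator;
        _title = title;
    }

    public override bool IsPageCountValid => _inner.IsPageCountValid;
    public override int PageCount => _inner.PageCount;
    public override IDocumentPaginatorSource Source => _inner.Source;

    public override Size PageSize
    {
        get => _inner.PageSize;
        set => _inner.PageSize = value;
    }

    public override DocumentPage GetPage(int pageNumber)
    {
        DocumentPage page = _inner.GetPage(pageNumber);

        // Compose the original page content with a header drawn over the top.
        var composed = new ContainerVisual();
        composed.Children.Add(page.Visual);

        var header = new DrawingVisual();
        using (DrawingContext dc = header.RenderOpen())
        {
            var text = new FormattedText(
                $"{_title} - page {pageNumber + 1}",
                CultureInfo.CurrentCulture,
                FlowDirection.LeftToRight,
                new Typeface("Segoe UI"),
                12,
                Brushes.Black,
                1.0); // pixelsPerDip; a real implementation would query the actual DPI
            dc.DrawText(text, new Point(48, 16));
        }
        composed.Children.Add(header);

        return new DocumentPage(composed, page.Size, page.BleedBox, page.ContentBox);
    }
}
```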

With those bits and pieces out of the way though, what you get is a decent component, sitting on top of a view model, that displays the report to the user in the application (i.e. scrolling, text selection, etc) and can print it out to any of the printers they have available.
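
Pulling the sketches above together, hypothetical usage looks something like this: the same document is shown in the application and handed to a PrintDialog via the custom paginator.

```csharp
using System.Windows.Controls;
using System.Windows.Documents;

public static class ReportPrinting
{
    // Hypothetical glue code tying the earlier sketches together.
    public static void ShowAndPrint(FlowDocumentScrollViewer viewer)
    {
        FlowDocument document = FlowDocumentSketch.Build();
        viewer.Document = document; // on-screen: scrolling, text selection, etc.

        var dialog = new PrintDialog();
        if (dialog.ShowDialog() == true)
        {
            // Print through the custom paginator so every page gets a header.
            dialog.PrintDocument(
                new HeaderedPaginator(document, "Receipting Report"),
                "Receipting Report");
        }
    }
}
```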

It's not exactly a groundbreaking advance in the high-tech field of report generation, but it gets the job done without any heavyweight components or painful integrations.

Conclusion

I’m sure there are hardcore reporting components out there that are built in pure WPF and do exactly what we want, but we just couldn’t find them.

Instead, we settled for knocking together some extensions to the existing FlowDocument functionality that accomplished exactly what we needed and no more.

With a little bit of effort, we could probably make them more generic and reusable, and I might even look into doing that at some point in the future, but to be honest, now that we've done the one report that we needed to do, we'll probably mostly forget about it.

Until the next time of course, then we’ll probably wonder why we didn’t make it generic and reusable in the first place.