I’ve been using MVVM as a pattern for UI work for a while now, mostly because of WPF. It’s a solid pattern, and while I’ve not really delved into the publicly available frameworks (Prism, Caliburn.Micro, etc.), I have put together a few reusable bits and pieces to make the journey easier.

One of those bits and pieces is the ability to perform work in the background, so that the UI remains responsive and usable while other important things are happening. This usually manifests as some sort of refresh or busy indicator on the screen after the user elects to do something complicated, but the important part is that the screen itself does not become unresponsive.

People get antsy when software “stops responding” and tend to murder it with extreme prejudice.

Now, the reusable components are by no means perfect, but they do get the job done.

Except when they don’t.

Right On Schedule

The framework itself is pretty bare bones stuff, with a few simple ICommand implementations and some view model base classes giving easy access to commonly desired functions.

The most complex part is the built-in support for easily doing background work in a view model while leaving the user experience responsive and communicative. The core idea is to segregate the stuff happening in the background from the stuff happening in the foreground (which is where all the WPF rendering and user interaction lives) using Tasks and TaskSchedulers from the TPL (Task Parallel Library), while also helping to manage some state to communicate what is happening to the user (like busy indicators).

Each view model is responsible for executing some long running operation (probably started from a command), and then deciding what should happen when that operation succeeds, fails or is cancelled.

In order to support this segregation, the software takes a dependency on three separate task schedulers: one for the background (a normal ThreadPoolTaskScheduler), one for the foreground (a DispatcherTaskScheduler or something similar) and one for tasks that need to be scheduled on a regular basis (another ThreadPoolTaskScheduler).

This dependency injection allows those schedulers to be overridden for testing purposes, so that they execute completely synchronously or can be pumped at will in tests.
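As a rough sketch of the scheduler segregation idea (the class and method names here are illustrative, not the actual framework's API):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A minimal sketch of a view model base that segregates background and
// foreground work via injected TaskSchedulers. In the real application the
// foreground scheduler would be a Dispatcher-based one; in tests both can
// be synchronous (or, as here, just the default thread pool scheduler).
public class BackgroundWorkViewModel
{
    private readonly TaskScheduler _background;
    private readonly TaskScheduler _foreground;

    public bool IsBusy { get; private set; }

    public BackgroundWorkViewModel(TaskScheduler background, TaskScheduler foreground)
    {
        _background = background;
        _foreground = foreground;
    }

    // Do work on the background scheduler, then marshal the result (or the
    // error) back to the foreground scheduler for the UI to consume.
    public Task ExecuteInBackground<T>(Func<T> work, Action<T> onSuccess, Action<Exception> onFailure)
    {
        IsBusy = true;
        return Task.Factory
            .StartNew(work, CancellationToken.None, TaskCreationOptions.None, _background)
            .ContinueWith(t =>
            {
                IsBusy = false;
                if (t.IsFaulted) onFailure(t.Exception.InnerException);
                else onSuccess(t.Result);
            }, CancellationToken.None, TaskContinuationOptions.None, _foreground);
    }
}
```

Because the schedulers come in through the constructor, a test can supply fully synchronous implementations and the whole pipeline becomes deterministic.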

It all worked pretty great until we started really pushing it hard.

Schedule Conflict

Our newest component to use the framework did a huge amount of work in the background. Not only that, because of the way the interface was structured, it pretty much did all of the work at the same time (i.e. as soon as the screen was loaded), in order to give the user a better experience and minimise the total amount of time spent waiting.

From a technical standpoint, the component needed to hit both a local database (not a real problem) and a remote API (much much slower), both of which are prime candidates for background work due to their naturally slow nature. Not a lot of CPU intensive work though, mostly just DB and API calls.
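If you squint, each view model was effectively trying to do something like the following on load (sketched here with async/await and placeholder delegates for brevity; the real code at the time used Tasks and TaskSchedulers):

```csharp
using System;
using System.Threading.Tasks;

// The database query and the remote API call are independent, so the ideal
// shape is to run them concurrently rather than serially. The delegate
// parameters stand in for our actual data access code.
public static class ScreenLoader
{
    public static async Task<(string DbResult, string ApiResult)> LoadAsync(
        Func<string> queryDatabase, Func<Task<string>> callApi)
    {
        // The DB call is synchronous, so push it onto the thread pool;
        // the API call is naturally asynchronous already.
        var dbTask = Task.Run(queryDatabase);
        var apiTask = callApi();
        await Task.WhenAll(dbTask, apiTask);
        return (dbTask.Result, apiTask.Result);
    }
}
```

Since almost all of the waiting is on IO rather than CPU, very few threads should actually be needed, which is exactly where the thread-hungry scheduler approach started to hurt.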

With 6-10 different view models all doing work in the background, it quickly became apparent that we were getting contention for resources, as not all Tasks were being completed in a reasonable amount of time. It was surprisingly hard to measure, but it looked like the Tasks manually scheduled via the TaskSchedulers were quite expensive to run, and the ThreadPoolTaskSchedulers could only run so much at the same time due to their limits on parallelisation and the number of threads they could have running at once.

So that sucked.

As a bonus annoyance, the framework did not lend itself to usage of async/await at all. It expected everything to be synchronous, where the “background” nature of the work was decided by virtue of where it was executed. Even the addition of one async function threw the whole thing into disarray, as it became harder to reason about where the work was actually being executed.

In the grand scheme of things, async/await is still relatively new (though not that new; it shipped with C# 5 back in 2012), but it’s generally considered a better and less resource intensive way to ensure that blocking calls (like HTTP requests, database IO, file IO and so on) are not causing both the system and the user to wait unnecessarily. As a result, more and more libraries are being built with async functions, sometimes not even exposing a synchronous version at all. It’s somewhat difficult to make an async function synchronous too, especially if you want to avoid potential deadlocks.

With those limitations noted, we had to do something.

Why Not Both?

What we ended up doing was allowing for async functions to be used as part of the background work wrappers inside the base view models. This retained the managed “busy” indicator functionality and the general programming model that had been put into place (i.e. do work, do this on success, this on failure, etc).
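A minimal sketch of what the async-capable wrapper looked like (names and signatures are illustrative, not the actual framework's), including a comment on the exact subtlety that caused trouble:

```csharp
using System;
using System.Threading.Tasks;

// An async overload of the background work wrapper. Note the trap: once the
// work is a Task-returning function, the success and failure callbacks no
// longer run on any particular scheduler unless you marshal them there
// explicitly - which is precisely where our context confusion came from.
public class AsyncBackgroundViewModel
{
    public bool IsBusy { get; private set; }

    public async Task ExecuteInBackground<T>(
        Func<Task<T>> work, Action<T> onSuccess, Action<Exception> onFailure)
    {
        IsBusy = true;
        try
        {
            var result = await work().ConfigureAwait(false);
            // Runs on whatever context the await resumed on, NOT necessarily
            // the foreground scheduler the synchronous version guaranteed.
            onSuccess(result);
        }
        catch (Exception e)
        {
            onFailure(e);
        }
        finally
        {
            IsBusy = false;
        }
    }
}
```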

Unfortunately what it also did was increase the overall complexity of the framework.

It was now much harder to reason about which context things were executing on, and while the usage of async functions was accounted for in the background work part of the framework, it was not accounted for in either the success or error paths.

This meant that it was all too easy to use an async function in the wrong context, causing a mixture of race conditions (where the overarching call wasn’t aware that part of itself was asynchronous) and bad error handling (where a developer had marked a function as async void to get around the compiler errors/warnings).

Don’t get me wrong, it all worked perfectly fine, assuming you knew to avoid all of the things that would make it break.

The tests got a lot more flaky though, because while it’s relatively easy to override TaskSchedulers with synchronous versions, it’s damn near impossible to force async functions to execute synchronously.

Sole Survivor

Here’s where it all gets pretty hypothetical, because the solution we actually have right now is the one that I just wrote about (the dual natured, overly complex abomination), and it’s causing problems on and off in a variety of ways.

A far better model is to incorporate async/await into the fabric of the framework, allowing for its direct usage and doing away entirely with the segmentation logic that I originally put together (with the TaskSchedulers and whatnot).

Stephen Cleary has some really good articles in MSDN Magazine about this sort of stuff (async ViewModels and their supporting constructs), so I recommend reading them all if you’re interested.

At a high level, if we expose the fact that the background work is occurring asynchronously (via async commands and whatnot), then not only do we make it far easier to do work in the background (literally just use the standard async/await constructs), but it becomes far easier to handle errors in a reliable way, and the tests become easier too, because they can simply be async themselves (which all major unit testing frameworks support).
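To make the idea concrete, here is a bare-bones async-aware command in the spirit of those articles. AsyncCommand here is my own sketch, not Cleary's published type:

```csharp
using System;
using System.Threading.Tasks;
using System.Windows.Input;

// An ICommand that exposes its asynchrony instead of hiding it behind
// scheduler plumbing. The command disables itself while running so the
// user can't double-fire it.
public class AsyncCommand : ICommand
{
    private readonly Func<Task> _execute;
    private bool _isExecuting;

    public AsyncCommand(Func<Task> execute)
    {
        _execute = execute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return !_isExecuting;
    }

    // async void is tolerable here because ICommand.Execute is a top-level
    // event-style entry point; all real awaiting happens in ExecuteAsync,
    // which tests can await (or Wait) directly.
    public async void Execute(object parameter)
    {
        await ExecuteAsync();
    }

    public async Task ExecuteAsync()
    {
        _isExecuting = true;
        CanExecuteChanged?.Invoke(this, EventArgs.Empty);
        try
        {
            await _execute();
        }
        finally
        {
            _isExecuting = false;
            CanExecuteChanged?.Invoke(this, EventArgs.Empty);
        }
    }
}
```

Errors surface through the returned Task, so the error path is testable instead of vanishing into an async void.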

It does represent a significant refactor though, which is always a bit painful.


I’m honestly still not sure what the better approach is for this sort of thing.

Async/await is so easy to use at first glance, but has a bunch of complexity and tripwires for the unwary. It’s also something of an infection: once you use it even a little bit, you kind of have to push it through everything in order for it to work properly end-to-end. This can be problematic for an existing system, where you want to introduce it a bit at a time.

On the other side, the raw TPL stuff that I put together is much more complex to use, but is relatively shallow. It’s much easier to reason about where work is actually happening, and relatively trivial to completely change the nature of the application for testing purposes. Ironically enough, the ability to easily change from asynchronous background workers to purely synchronous execution is actually detrimental in a way, because it means your tests aren’t really doing the same thing as your application will, which can mask issues.

My gut feel is to go with the newer thing, even though it feels a bit painful.

I think the pain is a natural response to something new though, so it’s likely to be a temporary thing.

Change is hard, you just have to push through it.


If you’re writing line of business software of almost any description, you probably need to generate reports.

In the industry that I’m currently working in, these reports are typically financial, showing the movement of money relating to a trust account.

Our legacy product has been around for a very very long time, so it has all the reports you could need. Unfortunately, they are generated using an ancient version of crystal reports, which I’m pretty sure is possessed by a demonic entity, so they can be a bit of a nightmare to maintain.

For our new cloud product, I’m not entirely familiar with how we deal with reports, but I think they are generated using HTML and then converted to PDF. It’s enough of a feature that there’s a whole reporting subsystem dedicated to the task.

Unfortunately, our most recent development efforts in the legacy product fall somewhere in the middle between terrifying ancient evil and relatively modern report generation processes.

The crystal reports component is VB6 native, and all of our new functionality is written in C# (using WPF for the view layer). We could call back into the VB6 to generate a report, but honestly, I don’t want to touch that with a ten-foot pole. We can’t easily leverage the HTML/PDF generation capabilities of the new cloud product either, as it was never built to be reused by an entirely different domain.

As a result, we’ve mostly just shied away from doing reports as a part of new features.

Our latest feature is a little different though, as it is an automated receipting solution, and a report of some description is no longer optional.

Last responsible moment and all that.

Forming An Opinion

If you’re trying to build out a report in WPF, you’d think that there would be a component there all ready to go.

You’d be mostly wrong, at least as far as native WPF is concerned. There are a few bits and pieces around, but nothing particularly concrete or well put together (at least as far as we could determine anyway).

Instead, most people recommend that you use the Windows Forms Report Viewer, and the systems that it is built on.

We poked at this for a little while, but it just seemed so…archaic and overcomplicated. All we wanted was to take a view model describing the report (of our design) and bind it, just like we would for a normal WPF view.

Enter the FlowDocument.

In The Flow

I’ll be honest, I didn’t actually do the work to build a report out in WPF using FlowDocuments, so most of what I’m writing here is second hand knowledge. I didn’t have to live through the pain, but the colleague that did it assures me that it was quite an ordeal.

At their core, FlowDocuments allow you to essentially create a document (like a newspaper article or something similar) in WPF. They handle things like sizing the content to the available area, scrolling and whatnot, all with the capability to render normal XAML controls alongside textual constructs (paragraphs, images, etc).

There are a few things that they don’t do out of the box though:

  • Pagination when printing. The default paginator is mostly concerned with making pages for a screen (rather than a printed document), and doesn’t allow for headers or footers at all. As a result, we implemented a custom DocumentPaginator that did what we needed it to do.
  • Templated content repetition. If you’re using WPF, and MVVM, you’re probably familiar with the ItemsControl (or its equivalents). If you want to do something similar in a FlowDocument though (i.e. bind to a list of things), you’ll need to put together a custom templating system. This is relevant to us because our report is mostly tabular, so we just wanted a relatively simple repeater.

With those bits and pieces out of the way though, what you get is a decent component that you can use on top of a view model that displays the report to the user in the application (i.e. scrolling, text selection, etc) with the capability to print it out to any of the printers they have available.

It’s not exactly a groundbreaking advance in the high-tech field of report generation, but it gets the job done without any heavyweight components or painful integrations.


I’m sure there are hardcore reporting components out there that are built in pure WPF and do exactly what we want, but we just couldn’t find them.

Instead, we settled for knocking together some extensions to the existing FlowDocument functionality that accomplished exactly what we needed and no more.

With a little bit of effort, we could probably make them more generic and reusable, and I might even look into doing that at some point in the future, but to be honest now that we’ve done the one report that we needed to do, we’ll probably mostly forget about it.

Until the next time of course, then we’ll probably wonder why we didn’t make it generic and reusable in the first place.


With my altogether too short break out of the way, it’s time for another post about something software related.

This week’s topic? Embedding websites into Windows desktop applications for fun and profit.

Not exactly the sexiest of topics, but over the years I’ve run into a few situations where the most effective solution to a problem involved embedding a website into another application. Usually an external party has some functionality that they want your users to access and they’ve put a bunch of effort into building a website to do just that. If you’re unlucky, they haven’t really thought ahead and the website is the only option for integration (what’s an API?), but all is not lost.

Real value can still be delivered to the users who want access to this amazing thing you have no control over.

So Many Options!

I’ve been pretty clear in the posts on this blog that one of the things my team is responsible for is a legacy VB6 desktop application. Now, it’s still being actively maintained and extended, so it’s not dead, but we try not to write VB6 when we can avoid it. Pretty much any new functionality we implement is written in C#, and if we need to present something visual to the user we default to WPF.

Hence, I’m going to narrow the scope of this post down to those technologies, with some extra information from a specific situation we ran into recently.

Right at the start of the whole “hey, let’s jam a website up in here” thought process, the first thing you need to do is decide whether or not you can “integrate” by just shelling out to the website using the current system default browser.

If you can, for the love of all that is good and holy, do it. You will save yourself a whole bunch of pain.
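The "just shell out" integration is tiny. A sketch (the helper names are mine):

```csharp
using System.Diagnostics;

// Ask the OS to open a URL in whatever the user's default browser is.
// UseShellExecute = true is what makes the OS resolve the URL to the
// default browser; note that it defaults to false on .NET Core and later,
// so it must be set explicitly there.
public static class DefaultBrowser
{
    public static ProcessStartInfo BuildLaunch(string url)
    {
        return new ProcessStartInfo(url) { UseShellExecute = true };
    }

    public static void Open(string url)
    {
        Process.Start(BuildLaunch(url));
    }
}
```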

Of course, if you need to provide a deeper integration than that, then you’re going to have to delve into the wonderful world of WPF web browser controls.

The two I can really speak to are CEFSharp and the built-in WPF WebBrowser control, both of which I’ve used in anger. There are definitely other offerings out there, but I don’t know enough about them to comment.

Chrome Dome

CEFSharp is a .NET (both WPF and WinForms) wrapper around the Chromium Embedded Framework, and to be honest, it’s pretty great.

In fact, I’ve already written a post about using CEFSharp in a different project. The goal there was to host a website within a desktop application, but there were some tricksy bits around supplying data to the website directly (via shared Javascript context), and we actually owned the website being embedded, so we had a lot of flexibility around making it do exactly what we needed it to do.

The CEFSharp library is usually my first port of call when it comes to embedding a website in a desktop application.

Unfortunately, when we tried to leverage CEFSharp.WPF in our VB6/C# franken-application we ran into some seriously weird issues.

Our legacy application is, at its core, VB6. All of the .NET code is triggered from the VB6 via COM interop, which essentially amounts to a message bus with handlers on the .NET side. VB6 raises an event, .NET handles it. Due to the magic of COM, this means that you can pretty much do all the .NET things, including using the various UI frameworks like WinForms and WPF. There is some weirdness with windows and who owns them, but all in all it works pretty well.
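A heavily simplified sketch of what that bridge looks like (all names here are illustrative, not our actual interop surface):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

// VB6 calls a single COM-visible entry point with a message name and a
// JSON payload; a dispatcher routes the message to whichever .NET handler
// registered for it. The registered handler is free to spin up WinForms
// or WPF UI as needed.
[ComVisible(true)]
public class MessageBus
{
    private readonly Dictionary<string, Action<string>> _handlers =
        new Dictionary<string, Action<string>>(StringComparer.OrdinalIgnoreCase);

    public void Register(string messageType, Action<string> handler)
    {
        _handlers[messageType] = handler;
    }

    // This is the method the VB6 side invokes via COM interop.
    public bool Handle(string messageType, string jsonPayload)
    {
        if (_handlers.TryGetValue(messageType, out var handler))
        {
            handler(jsonPayload);
            return true;
        }
        return false;
    }
}
```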

To get to the point, we put a CEFSharp.WPF browser into a WPF screen, triggered it from VB6 and from that point forward the application would crash randomly with Access Violations any time after the screen was closed.

We tried the obvious things, like controlling the lifetime of the browser control ourselves (and disposing of it whenever the window closed), but in the end we didn’t get to the bottom of the problem and gave up on CEFSharp. Disappointing but not super surprising, given that that sort of stuff is damn near impossible to diagnose, especially when you’re working in a system built by stitching together a bunch of technological corpses.


Then there is the built-in WPF WebBrowser control, which accomplishes basically the same thing.

Why not go with this one first? Surely built in components are going to be superior and better supported compared to third party components?

Well, for one, it’s somewhat dependent on Internet Explorer, which can lead to all sorts of weirdness.

A good example of said weirdness is the following issue we encountered:

  • You try to load an HTTPS website using TLS 1.2 through the WebBrowser control
  • It doesn’t work, giving a “page cannot be loaded” error, but doesn’t tell you why
  • You load the page in Chrome and it works fine
  • You load the page in Internet Explorer and it tells you TLS 1.2 is not enabled
  • You go into the Internet Explorer settings and enable support for TLS 1.2
  • Internet Explorer works
  • Your application also magically works

The second pertinent piece of weirdness relates specifically to the control’s participation in the WPF layout and rendering engine.

The WebBrowser control does not follow the same rendering logic as a normal WPF control, likely because it’s just a wrapper around something else. It works very similarly to a WinForms control hosted in WPF, which is a nice way of saying it works pretty terribly.

For example, it renders on top of everything regardless of how you think you organised it, which can lead to all sorts of strange visual artefacts if you tend to use the Z-axis to create layered interfaces.


With CEFSharp causing mysterious Access Violations that we could not diagnose, the default WPF WebBrowser was our only choice. We just had to be careful with when and how we rendered it.

Luckily, the website we needed to use was relatively small and simple (it was a way for us to handle sensitive data in a secure way), so while it was weird and ugly, the default WebBrowser did the job. It didn’t exactly make it easy to craft a nice user experience, but a lot of the pain we experienced there was more the fault of the website itself than the integration.

That’s a whole other story though.

In the end, if you don’t have a horrifying abomination of technologies like we do, you can probably just use CEFSharp. It’s solid, well supported and has heaps of useful features, assuming you handle it with a bit of care.


It’s time to step away from web services, log aggregation and AWS for a little while.

It’s time to do some UI work! Not HTML though, unfortunately; Windows desktop software.

The flagship application that my team maintains is an old (15+ years) VB6 application. It brings in a LOT of money, but to be honest, it’s not the best maintained piece of software I’ve ever seen. It’s not the worst either, which is kind of sad. Like most things it falls somewhere in the middle. More on the side of bad than good though, for sure.

In an attempt to keep the application somewhat current, it has a .NET component to it. I’m still not entirely sure how it works, but the VB6 code uses COM to call into some .NET functionality, primarily through a message bus (and some strongly typed JSON messages). It’s pretty clever actually, even if it does lead to some really strange threading issues from time to time.

Over the years, good sized chunks of new functionality have been developed in .NET, with the UI in Windows Forms.

Windows Forms is a perfectly fine UI framework, and don’t let anyone tell you different. Sure it has its flaws and trouble spots, but on the whole, it works mostly as you would expect it to, and it gets the job done. The downside is that most Windows Forms UIs look the same, and you often end up with business logic tightly coupled to the UI. When your choice is Windows Forms or VB6 though, well, it’s not really a choice.

For our most recent project, we wanted to try something new. Well, new for the software component anyway, certainly not new to me, some of the other members of the team, or reality in general.


Present that Framework

I’m not going to go into too much detail about WPF, but it’s the replacement for Windows Forms. It focuses on separating the actual presentation of the UI from the logic that drives it, and is extremely extensible. It’s definitely not perfect, and the first couple of versions of it were a bit rough (Avalon!), but it’s in a good spot now.

Personally, I particularly like how it makes using the MVVM (Model-View-ViewModel) pattern easy, with its support for bindings and commands (among other things). MVVM is a good pattern to strive for when it comes to developing applications with complicated UIs, because you can test your logic independently of the presentation. You do still need to test it all together obviously, because bindings are code, and you will make mistakes.

I was extremely surprised when we tried to incorporate a WPF form into the VB6 application via the .NET channels it already had available.

Mostly because it worked.

It worked without some insane workaround or other crazy thing. It just worked.

Even showing a dialog worked, which surprised me, because I remember having a huge amount of trouble with that when I was writing a mixed Windows Form/WPF application.

I quickly put together a fairly primitive framework to support MVVM (no need to invest into one of the big frameworks just yet, we’re only just starting out) and we built the new form up to support the feature we needed to expose.

That was a particularly long introduction, but what I wanted to talk about in this post was using design time variables in WPF to make the designer actually useful.

Intelligent Design

The WPF designer is…okay.

Actually, it’s pretty good, if a little bit flaky from time to time.

The problem I’ve encountered every time I’ve used MVVM with WPF is that a lot of the behaviour of my UI component is dependent on its current data context. Sure, it has default behaviour when it’s not bound to anything, but if you have error overlays, or a message channel used to communicate with the user, or expandable bits within an ItemsControl whose items have their own data templates, it can be difficult to visualise how everything looks just from the XAML.

It takes time to compile and run the application as well (to see it running in reality), even if you have a test harness that opens the component you are working on directly.

Tight feedback loops are easily one of the most important parts of developing software quickly, and the designer is definitely the quickest feedback loop for WPF by far.

Luckily, there are a number of properties that you can set on your UI component which will only apply at design time, and the most useful one is the design time DataContext.

This property, when applied, will set the DataContext of your component when it is displayed in the designer, and assuming you can write appropriate classes to interface with it, gives you a lot of power when it comes to viewing your component in its various states.

Contextual Data

I tend towards a common pattern when I create view models for the designer. I will create an interface for the view model (based on INotifyPropertyChanged), a default implementation (the real view model) and a view model creator or factory, specifically for the purposes of the designer. They tend to look like this (I’ve included the view model interface for completeness):

namespace Solavirum.Unspecified.ViewModel
{
    public interface IFooViewModel : INotifyPropertyChanged
    {
        string Message { get; }
    }
}

namespace Solavirum.Unspecified.View.Designer
{
    public class FooViewModelCreator
    {
        private static readonly FooViewModelCreator _Instance = new FooViewModelCreator();

        public static IFooViewModel ViewModel
        {
            get { return _Instance.Create(); }
        }

        public IFooViewModel Create()
        {
            return new DummyFooViewModel
            {
                Message = "This is a message to show in the designer."
            };
        }

        private class DummyFooViewModel : BaseViewModel, IFooViewModel
        {
            public string Message { get; set; }
        }
    }
}

As you can see, it makes use of a private static instance of the creator, but it doesn’t have to (it’s just for caching purposes, so the view model doesn’t have to be recreated all the time; it’s probably not necessary). It exposes a read-only view model property which just executes the Create function, and can be bound to in XAML like so:

<UserControl
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:Designer="clr-namespace:Solavirum.Unspecified.View.Designer"
    mc:Ignorable="d"
    d:DataContext="{x:Static Designer:FooViewModelCreator.ViewModel}">
    <TextBlock Text="{Binding Message}" />
</UserControl>

With this hook into the designer, you can do all sorts of crazy things. Take this creator for example:

namespace Solavirum.Unspecified.View.Designer
{
    public class FooViewModelCreator
    {
        private static readonly FooViewModelCreator _Instance = new FooViewModelCreator();

        public static IFooViewModel ViewModel
        {
            get { return _Instance.Create(); }
        }

        public IFooViewModel Create()
        {
            var foreground = TaskScheduler.Current;
            var messages = new List<string>()
            {
                "This is the first message. Its short.",
                "This is the second message. Its quite a bit longer than the first message, and is useful for determining whether or not wrapping is working correctly."
            };

            var vm = new DummyFooViewModel();

            var messageIndex = 0;
            Task.Factory.StartNew(
                () =>
                {
                    while (true)
                    {
                        var newMessage = messages[messageIndex];
                        messageIndex = (messageIndex + 1) % messages.Count;
                        Task.Factory.StartNew(
                            () => vm.Message = newMessage,
                            CancellationToken.None,
                            TaskCreationOptions.None,
                            foreground);
                        Thread.Sleep(TimeSpan.FromSeconds(5));
                    }
                },
                CancellationToken.None,
                TaskCreationOptions.LongRunning,
                TaskScheduler.Default);

            return vm;
        }

        private class DummyFooViewModel : IFooViewModel
        {
            private string _Message;

            public string Message
            {
                get { return _Message; }
                set
                {
                    _Message = value;
                    RaisePropertyChanged("Message");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void RaisePropertyChanged(string propertyName)
            {
                var handlers = PropertyChanged;
                if (handlers != null)
                {
                    handlers(this, new PropertyChangedEventArgs(propertyName));
                }
            }
        }
    }
}
It uses a long running task to change the state of the view model over time, so you can see it in its various states in the designer. This is handy if your view model exposes an error property (i.e. if it does a refresh of some sort, and you want to notify the user in an unobtrusive way when something bad happens) or if you just want to see what it looks like with varying amounts of text or something similar. Notice that it has to marshal the change onto the foreground thread (which in the designer is the UI thread), or the property changed event won’t be picked up by the binding engine.

Once you’ve set up the creator and bound it appropriately, you can do almost all of your UI work in the designer, which saves a hell of a lot of time. Of course you can’t verify button actions, tooltips or anything else that requires interaction (at least to my knowledge), but it’s still a hell of a lot better than starting the application every time you want to see if your screen looks okay.

Getting Inside

Now, because you are running code, and you wrote that code, it will have bugs. There’s no shame in that, all code has bugs.

It’s hard to justify writing tests for designer specific support code (even though I have done it), because they don’t necessarily add a lot of value, and they definitely increase the time required to make a change.

Instead I mostly just focus on debugging when the designer view models aren’t working the way that I think they should.

In order to debug, you will need to open a second instance of Visual Studio, with the same project, and attach it to the XAML designer process (XDesProc). Now, since you have two instances of Visual Studio, you might also have two instances of the XAML designer process, but it’s not hard to figure out the right one (trial and error!). Once you’ve attached to the process you can put breakpoints in your designer specific code and figure out where it’s going wrong.

I’ll mention it again below, but the app domain for the designer is a little bit weird, so sometimes it might not work at all (no symbols, breakpoints not being hit, etc). Honestly, I’m not entirely sure why, but a restart of both instances of Visual Studio, combined with a fresh re-compilation usually fixes that.


There are a few gotchas with the whole designer thing I’ve outlined above that are worth mentioning.

The first is that if you do not version your DLL appropriately (within Visual Studio, not just within your build server), you will run into issues where old versions of your view model are bound into the designer. This is especially annoying when you have bugs, as it will quite happily continue to use the old, broken version. I think the designer only reloads your libraries when it detects a version change, but I can’t back that up with proof.

The solution is to make sure that your version changes every time you compile, which honestly, you should be doing anyway. I’ve had success with just using the reliable 0.0.* version attribute in the assembly when the project is compiled as debug (using an #if DEBUG). You just have to make sure that whatever approach you use for versioning in your build server doesn’t clash with that.
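For illustration, the debug-only wildcard looks something like this in AssemblyInfo.cs (the release number here is just a placeholder for whatever your build server stamps; note that newer SDK-style projects need deterministic builds turned off before the compiler will accept a wildcard):

```csharp
// The wildcard makes the build and revision numbers change on every
// compile, which is what convinces the designer to reload the library.
#if DEBUG
[assembly: System.Reflection.AssemblyVersion("0.0.*")]
#else
[assembly: System.Reflection.AssemblyVersion("1.0.0.0")]
#endif
```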

The second gotcha is that the app domain for the designer is a bit…weird. For example, Ninject won’t automatically load its extension modules in the designer, you have to load them manually. For Ninject specifically, this is a fairly straightforward process (just create a DesignerKernel), but there are other issues as well.

Sometimes the designer just won’t run the code you want it to. Typically this happens after you’ve been working on it for a while, constantly making new builds of the view model creator. The only solution I’ve found to this is just to restart Visual Studio. I’m using Visual Studio 2013 Update 5, so it might be fixed/better in 2015, but I don’t know. It’s not a deal breaker anyway; basically just be on the lookout for failures that look like they are definitely not the fault of your code, and restart Visual Studio before you start pulling your hair out.


I highly recommend going to the extra effort of creating view models that can be bound in your component at design time. It’s a great help when you’re building the component, but it also helps you to validate (manually) whether or not your component acts as you would expect when the view model is in various states.

It can be a little bit difficult to maintain if your code is changing rapidly (breaking view models apart can have knock-on effects on the creator for example, increasing the amount of work required in order to accomplish a change), but the increase in development speed for UI components (which are notoriously fiddly anyway) is well worth it.

It’s also really nice to see realistic looking data in your designer. It makes the component feel more substantial, like it actually might accomplish something, instead of being an empty shell that only fills out when you run the full application.