
Software Development as a discipline puts a bunch of effort into trying to minimise the existence and creation of bugs, but the reality is that it’s an investment/return curve that flattens off pretty quickly.

Early discovery of issues is critical. Remember, the cost to the business for a bug existing is never lower than it is at development time. The longer it has to fester, the worse it’s going to get.

Of course, when a bug is discovered, there are decisions to make around whether or not to fix it. For me, every bug that exists in a piece of software that might cause an issue for a user is a mark against its good name, so my default policy is to fix. Maybe not in the same piece of work that it was found in, but in general, bugs should be fixed.

That is, unless you hit that awkward conjunction of high cost/low incidence.

Why waste a bunch of money fixing a bug that might never happen?

Split Personality

I’m sure you can guess from the mere existence of this blog post that this is exactly the situation we found ourselves in recently.

While evaluating a new component in our legacy application, we noticed that it was technically possible to break what was intended to be an entirely atomic operation consisting of multiple database writes.

Normally this wouldn’t even be worth talking about, as it’s basically the reason that database transactions exist. When used correctly, it’s a guarantee of an all-or-nothing situation.

Unfortunately, one of the writes was in Visual Basic 6 code, and the other was in .NET.

I don’t know if you’ve ever tried to span a database transaction across a technology boundary like that, but it’s not exactly the easiest thing in the world.

When we looked into the actual likelihood of the issue occurring, we discovered that if the VB6 part failed, we could easily just rollback the .NET part. If the write failed in .NET though, we had no way to go back and undo the work that had already been done in VB6. Keep in mind, this entire section was essentially creating transactions in a trust accounting application, so non-atomic operations can get users into all sorts of terrible situations.

On deeper inspection, the only way we thought the .NET stuff could fail was through transitory database issues. That is, connection or command timeouts or disconnects.

We implemented a relatively straightforward retry strategy to deal with those sorts of failures and then moved on. Sure, it wasn’t perfect, but it seemed like we’d covered our bases pretty well and mitigated the potential issue as best we could.
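The general shape of it was something like the sketch below. It’s not our actual implementation (the names are illustrative, and the real thing is wired into the interop machinery), but the idea is just to retry the write a handful of times when the failure looks like a temporary connectivity problem:

public static class TransientRetry
{
    // Executes the supplied write, retrying a few times when the failure looks like a
    // temporary connectivity problem (timeouts, dropped connections).
    public static async Task ExecuteWithRetryAsync(Func<Task> write, int maxAttempts = 3, ILogger logger = null)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                await write();
                return;
            }
            catch (Exception ex) when (attempt < maxAttempts && IsTransient(ex))
            {
                logger?.Warning(ex, "Write attempt [{attempt}] failed with a transient error, retrying", attempt);
                await Task.Delay(TimeSpan.FromSeconds(attempt));
            }
        }
    }

    private static bool IsTransient(Exception ex)
    {
        if (ex is TimeoutException) return true;

        // -2 is the SQL Server client error number for a command timeout, 53 is "server not found".
        var sql = ex as SqlException;
        return sql != null && (sql.Number == -2 || sql.Number == 53);
    }
}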

I Did Not See That One Coming

Of course, the code failed in a way completely unrelated to temporary connectivity issues.

In our case, we were stupid and attempted to write an Entity Framework entity to the database whose string values exceeded the column size limits. Long story short, we were concatenating an information field from some other fields and didn’t take into account that maybe the sum of those other fields would exceed the maximum.

The write failure triggered exactly the situation that we were worried about: the actual trust account record had been created (VB6), but our record of it happening was missing (.NET).

I still don’t actually know why we bothered implementing column size limits. As far as I know, there is no difference between a column of VARCHAR(60) and VARCHAR(MAX) when it comes to performance. Sure, you could conceivably store a ridiculous amount of data in the MAX column at some point, but I feel like that is a lot less destructive than the write failures (and their knock-on effects) that we got.

Even worse, from the user’s point of view, the operation looked like it had worked. There were no error notifications visible to them, because we couldn’t write to the table that we used to indicate errors! When they returned to their original action list though, the item that failed was still present. They then processed it again and the same thing happened (it looked like it worked, but the item was still in the list afterwards), at which point they twigged that something was wrong and contacted our support team (thank god).

Once we found out about the issue, we figured out pretty quickly what the root cause was thanks to our logging and cursed our hubris.

Off With Their Head!

The fix for this particular problem was easy enough and involved two extension methods: one for truncating a string, and another for scanning an object and automatically truncating its string properties according to their data annotation attributes.

public static string Truncate(this string value, int maxLength, ILogger logger = null)
{
    if (string.IsNullOrEmpty(value))
    {
        return value;
    }

    if (maxLength < 0) throw new ArgumentException($"Truncate cannot be used with a negative max length (supplied {nameof(maxLength)} was [{maxLength}]). That doesn't even make sense, what would it even do?", nameof(maxLength));

    if (value.Length <= maxLength)
    {
        return value;
    }

    // Leave room for an ellipsis to show the value was cut short, unless the allowed length
    // is so small that the ellipsis wouldn't fit anyway.
    var truncated = maxLength <= 3 ? value.Substring(0, maxLength) : value.Substring(0, maxLength - 3) + "...";

    logger?.Debug("The string [{original}] was truncated because it was longer than the allowed length of [{length}]. The truncated value is [{truncated}]", value, maxLength, truncated);

    return truncated;
}

public static void TruncateAllStringPropertiesByTheirMaxLengthAttribute(this object target, ILogger logger = null)
{
    // Find every writable string property that has been decorated with [MaxLength].
    var props = target.GetType().GetProperties().Where(prop => Attribute.IsDefined(prop, typeof(MaxLengthAttribute)) && prop.CanWrite && prop.PropertyType == typeof(string));

    foreach (var prop in props)
    {
        var maxLength = prop.GetCustomAttribute(typeof(MaxLengthAttribute)) as MaxLengthAttribute;

        // A bare [MaxLength] means "maximum allowable" (Length of -1), so there is nothing
        // to truncate in that case.
        if (maxLength != null && maxLength.Length >= 0)
        {
            prop.SetValue(target, ((string)prop.GetValue(target)).Truncate(maxLength.Length, logger));
        }
    }
}

Basically, before we write the entity in question, just call the TruncateAllStringPropertiesByTheirMaxLengthAttribute method on it.
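For illustration, with a hypothetical entity like the one below (the real ones obviously have a lot more going on), the call just slots in before the save:

public class ReceiptAuditRecord
{
    public int Id { get; set; }

    // The sort of concatenated information field that got us into trouble in the first place.
    [MaxLength(200)]
    public string Description { get; set; }
}

// Just before the write:
var record = new ReceiptAuditRecord { Description = BuildDescription(receipt) };
record.TruncateAllStringPropertiesByTheirMaxLengthAttribute(logger);

context.ReceiptAuditRecords.Add(record);
context.SaveChanges();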

With the immediate problem solved, we were still left with two outstanding issues though:

  • A failure occurred in the code and the user was not notified
  • An atomic operation was still being split across two completely different programming contexts

In this particular case we didn’t have time to alleviate the first issue, so we pushed forward with the fix for the issue that we knew could occur.

We still have absolutely no idea how to deal with the second issue though, and honestly, probably never will.

Conclusion

In retrospect, I don’t think we actually made the wrong decision. We identified an issue, analysed the potential occurrences, costed a full fix and then implemented a smaller one that should have covered our bases.

The retry strategy would likely have dealt with transitory failures handily, we just didn’t identify the other cases in which that section could fail.

As much as I would like to, it’s just not cost-effective to account for every single edge case when you’re developing software.

Well, unless you’re building like pacemaker software or something.

Then you probably should.


If you’re writing line of business software of almost any description, you probably need to generate reports.

In the industry that I’m currently working in, these reports are typically financial, showing the movement of money relating to a trust account.

Our legacy product has been around for a very very long time, so it has all the reports you could need. Unfortunately, they are generated using an ancient version of Crystal Reports, which I’m pretty sure is possessed by a demonic entity, so they can be a bit of a nightmare to maintain.

For our new cloud product, I’m not entirely familiar with how we deal with reports, but I think that they are generated using HTML and then converted to PDF. It’s enough of a feature that there’s a whole reporting subsystem dedicated to the task.

Unfortunately, our most recent development efforts in the legacy product fall somewhere in the middle between terrifying ancient evil and relatively modern report generation processes.

The Crystal Reports component is VB6 native, and all of our new functionality is written in C# (using WPF for the view layer). We could call back into the VB6 to generate a report, but honestly, I don’t want to touch that with a ten-foot pole. We can’t easily leverage the HTML/PDF generation capabilities of the new cloud product either, as it was never built to be reused by an entirely different domain.

As a result, we’ve mostly just shied away from doing reports as a part of new features.

Our latest feature is a little different though, as it is an automated receipting solution, and a report of some description is no longer optional.

Last responsible moment and all that.

Forming An Opinion

If you’re trying to build out a report in WPF, you’d think that there would be a component there all ready to go.

You’d be mostly wrong, at least as far as native WPF is concerned. There are a few bits and pieces around, but nothing particularly concrete or well put together (at least as far as we could determine anyway).

Instead, most people recommend that you use the Windows Forms Report Viewer, and the systems that it is built on.

We poked at this for a little while, but it just seemed so…archaic and overcomplicated. All we wanted was to take a view model describing the report (of our design) and bind it, just like we would for a normal WPF view.

Enter the FlowDocument.

In The Flow

I’ll be honest, I didn’t actually do the work to build a report out in WPF using FlowDocuments, so most of what I’m writing here is second hand knowledge. I didn’t have to live through the pain, but the colleague that did it assures me that it was quite an ordeal.

At their core, FlowDocuments allow you to essentially create a document (like a newspaper article or something similar) in WPF. They handle things like sizing the content to the available area, scrolling and whatnot, all with the capability to render normal XAML controls alongside textual constructs (paragraphs, images, etc).
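As a toy example (nothing to do with our actual report), building one up in code looks something like this, with plain text constructs and a table living side by side:

// Builds a trivial FlowDocument with a title and a two-column table header, hosted in a
// FlowDocumentScrollViewer so it gets sizing, scrolling and text selection for free.
private static FlowDocumentScrollViewer BuildToyReport()
{
    var document = new FlowDocument();
    document.Blocks.Add(new Paragraph(new Run("Receipting Summary")) { FontSize = 20 });

    // Tabular content is built out of TableRowGroups, TableRows and TableCells.
    var table = new Table();
    var group = new TableRowGroup();

    var header = new TableRow();
    header.Cells.Add(new TableCell(new Paragraph(new Run("Date"))));
    header.Cells.Add(new TableCell(new Paragraph(new Run("Amount"))));
    group.Rows.Add(header);

    table.RowGroups.Add(group);
    document.Blocks.Add(table);

    return new FlowDocumentScrollViewer { Document = document };
}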

There are a few things that they don’t do out of the box though:

  • Pagination when printing. The default paginator is mostly concerned with making pages for a screen (rather than a printed document), and doesn’t allow for headers or footers at all. As a result, we implemented a custom DocumentPaginator that did what we needed it to do (there’s a rough sketch of the idea just after this list).
  • Templated content repetition. If you’re using WPF and MVVM, you’re probably familiar with the ItemsControl (or its equivalents). If you want to do something similar in a FlowDocument though (i.e. bind to a list of things), you’ll need to put together a custom templating system. This is relevant to us because our report is mostly tabular, so we just wanted a relatively simple repeater.
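The custom paginator from the first point is mostly just a wrapper; it delegates to the paginator the FlowDocument already provides and then stamps a header and footer onto each page. A rough sketch of the idea (with illustrative names and a very unglamorous header) looks like this:

public class HeaderFooterPaginator : DocumentPaginator
{
    private readonly DocumentPaginator _inner;
    private readonly string _title;

    public HeaderFooterPaginator(DocumentPaginator inner, string title)
    {
        _inner = inner;
        _title = title;
    }

    public override bool IsPageCountValid => _inner.IsPageCountValid;
    public override int PageCount => _inner.PageCount;
    public override Size PageSize { get { return _inner.PageSize; } set { _inner.PageSize = value; } }
    public override IDocumentPaginatorSource Source => _inner.Source;

    public override DocumentPage GetPage(int pageNumber)
    {
        var page = _inner.GetPage(pageNumber);

        // Compose the original page content with a header and footer drawn over the top.
        var composed = new ContainerVisual();
        composed.Children.Add(page.Visual);

        var annotations = new DrawingVisual();
        using (var context = annotations.RenderOpen())
        {
            var typeface = new Typeface("Segoe UI");
            var header = new FormattedText(_title, CultureInfo.CurrentCulture, FlowDirection.LeftToRight, typeface, 12, Brushes.Black);
            var footer = new FormattedText($"Page {pageNumber + 1} of {PageCount}", CultureInfo.CurrentCulture, FlowDirection.LeftToRight, typeface, 10, Brushes.Gray);

            context.DrawText(header, new Point(40, 20));
            context.DrawText(footer, new Point(40, page.Size.Height - 40));
        }
        composed.Children.Add(annotations);

        return new DocumentPage(composed, page.Size, page.BleedBox, page.ContentBox);
    }
}

You then hand an instance of that to PrintDialog.PrintDocument instead of the paginator that comes straight off the FlowDocument, and the header and footer turn up on every printed page.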

With those bits and pieces out of the way though, what you get is a decent component that you can use on top of a view model that displays the report to the user in the application (i.e. scrolling, text selection, etc) with the capability to print it out to any of the printers they have available.

It’s not exactly a groundbreaking advance in the high-tech field of report generation, but it gets the job done without any heavyweight components or painful integrations.

Conclusion

I’m sure there are hardcore reporting components out there that are built in pure WPF and do exactly what we want, but we just couldn’t find them.

Instead, we settled for knocking together some extensions to the existing FlowDocument functionality that accomplished exactly what we needed and no more.

With a little bit of effort, we could probably make them more generic and reusable, and I might even look into doing that at some point in the future, but to be honest now that we’ve done the one report that we needed to do, we’ll probably mostly forget about it.

Until the next time of course, then we’ll probably wonder why we didn’t make it generic and reusable in the first place.


Building new features and functionality on top of legacy software is a special sort of challenge, one that I’ve written about from time to time.

To be honest though, the current legacy application that I’m working with is not actually that bad. The prior technical lead had the great idea to implement a relatively generic way to execute modern .NET functionality from the legacy VB6 code thanks to the magic of COM, so you can still work with a language that doesn’t make you sad on a day to day basis. It’s a pretty basic eventing system (i.e. both sides can raise events that are handled by the other side), but it’s effective enough.

Everything gets a little bit tricksy when windowing and modal dialogs are involved though.

One Thing At A Time

The legacy application is basically a multiple document interface (MDI) experience, where the user is free to open a bunch of different entities and screens at the same time. Following this approach for new functionality adds a bunch of inherent complexity though, in that the user might edit an entity that is currently being displayed elsewhere (maybe in a list), requiring some sort of common, global channel for saying “hey, I’ve updated entity X, please react accordingly”.

This kind of works in the legacy code (VB6), because it just runs global refresh functions and changes form controls whenever it feels like it.

When the .NET code gets involved though, it gets very difficult to maintain both worlds in the same way, so we’ve taken to isolating all the new features from the legacy stuff, primarily through modal dialogs. That is, the user is unable to access the rest of the application when the .NET feature is running.

To be honest, I was pretty surprised that we could open up a modal form in WPF from an event handler started in VB6, but I think it worked because both VB6 and .NET shared a common UI thread, and the modality of a form is handled at a low level common to both technologies (i.e. win32 or something).

We paid a price from a user experience point of view of course, but we mostly worked around it by making sure that the user had all of the information they needed to make a decision on any screen in the .NET functionality, so they never needed to refer back to the legacy stuff.

Then we did a new thing and it all came crashing down.

Unforgettable Legacy

Up until fairly recently, the communication channel between VB6 and .NET was mostly one way. VB6 would raise X event, .NET would handle it by opening up a modal window or by executing some code that then returned a response. If there was any communication that needed to go back to VB6 from the .NET, it always happened after the modal window was already closed.

This approach worked fine until we needed to execute some legacy functionality as part of a workflow in .NET, while still having the .NET window be displayed in a modal fashion.

The idea was simple enough.

  • Use the .NET functionality to identify the series of actions that needed to be completed
  • In the background, iterate through that list of actions and raise an event to be handled by the VB6 to do the actual work
  • This event would be synchronous, in that the .NET would wait for the VB6 to finish its work and respond before moving on to the next item
  • Once all actions are completed, present a summary to the user in .NET (there’s a rough sketch of this plan in code just after this list)
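In code terms, the plan looked roughly like the sketch below. The event bus and the helper names are made up for illustration; the real thing is the COM interop layer described earlier:

// Identify the work in .NET, then push each item through to VB6 one at a time, blocking
// until the legacy side reports back before moving on to the next item.
var actions = IdentifyOutstandingActions();
var results = new List<ActionResult>();

foreach (var action in actions)
{
    // RaiseAndWait is a stand-in for the interop event bus: raise an event handled by VB6,
    // block until it responds, and capture the outcome.
    results.Add(legacyEventBus.RaiseAndWait(action));
}

ShowSummary(results);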

We’d actually used a similar approach for a different feature in the past, and while we had to refactor some of the VB6 to make the functionality available in a way that made sense, it worked okay.

This time the legacy functionality we were interested in was already available as a function on a static class, so easily callable. I mean, it was a poorly written function dependent on some static state, so it wasn’t a complete walk in the park, but we didn’t need to do any high-risk refactoring or anything.

Once we wrote the functionality though, two problems became immediately obvious:

  1. The legacy functionality could pop up dialogs, asking the user questions relevant to the operation. This was actually kind of good, as one of the main reasons we didn’t want to reimplement was because we couldn’t be sure we would capture all the edge cases, so using the existing functionality guaranteed that we would, because it was already doing it (and had been for years). These cases were rare, so while they were a little disconcerting, they were acceptable.
  2. Sometimes executing the legacy functionality would murder the modal-ness of the .NET window, which led to all sorts of crazy side effects. This seemed to happen mostly when the underlying VB6 context was changed in such a way by the operation that it would historically have required a refresh. When it happened, the .NET window would drop behind the main application window, and the main window would be fully interactable, including opening additional windows (which would explode if they too were modal). There did not seem to be a way to get the original .NET window back into focus either. I suspect that there were a number of Application.DoEvents calls secreted throughout the byzantine labyrinth of code that were forcing screen redraws, but we couldn’t easily prove it. It was definitely broken though.

The first problem wasn’t all that bad, even if it wasn’t great for a semi-automated process.

The second one was a deal-breaker.

Freedom! Horrible Terrifying Freedom!

We tried a few things to “fix” the whole modal window problem, including:

  • Trying to restore the modal-ness of the window once it had broken. This didn’t work at all, because the window was still modal somehow, and technically we’d lost the thread context from the initial .ShowDialog call (which may or may not have still been blocked, it was hard to tell). In fact, other windows in the application that required modality would explode if you tried to use them, with an error similar to “cannot display a modal dialog when there is already one in effect”.
  • Trying to locate and fix the root reason why the modal-ness was being removed. This was something of a fool’s errand, as I mentioned above, as the code was ridiculously labyrinthian and it was impossible to tell what was actually causing the behaviour. Also, it would involve simultaneously debugging both VB6 and .NET, which is somewhat challenging.
  • Forcing the .NET window to be “always on top” while the operation was happening, to at least prevent it from disappearing. This somewhat worked, but required us to use raw Win32 windowing calls, and the window was still completely broken after the operation was finished. Also, it would be confusing to make the window always on top all the time, while leaving the ability to click on the parts of the parent window that were visible.

In the end, we went with just making the .NET window non-modal and dealing with the ramifications. With the structures we’d put into place, we were able to refresh the content of the .NET window whenever it gained focus (to prevent it displaying incorrect data due to changes in the underlying application), and our refreshes were quick due to performance optimization, so that wasn’t a major problem anymore.

It was still challenging though, as sitting a WPF Dispatcher on top of the main VB6 UI thread (well, the only VB6 thread) and expecting them both to work at the same time was just asking too much. We had to create a brand new thread just for the WPF functionality, and inject a TaskScheduler initialized on the VB6 thread for scheduling the events that get pushed back into VB6.
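The shape of that arrangement is roughly the sketch below. The class and member names are illustrative (not our actual interop code), and it assumes a SynchronizationContext is available to capture on the VB6 thread; if there isn’t one, it has to be set up first:

public static class LegacyWpfBridge
{
    // Captured while running on the VB6 UI thread, so work can be marshalled back to it later.
    public static TaskScheduler Vb6Scheduler { get; private set; }

    // Called once from an interop handler that is already executing on the VB6 UI thread.
    public static void InitialiseOnVb6Thread()
    {
        Vb6Scheduler = TaskScheduler.FromCurrentSynchronizationContext();
    }

    // Runs the supplied WPF work on a dedicated STA thread with its own Dispatcher, so it
    // no longer has to share a message loop with VB6.
    public static void RunOnWpfThread(Action showWindow)
    {
        var thread = new Thread(() =>
        {
            showWindow();
            System.Windows.Threading.Dispatcher.Run(); // keeps the WPF message pump alive
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.IsBackground = true;
        thread.Start();
    }

    // Schedules work (like raising an event that VB6 handles) back onto the VB6 thread.
    public static Task RaiseOnVb6Thread(Action raiseEvent)
    {
        return Task.Factory.StartNew(raiseEvent, CancellationToken.None, TaskCreationOptions.None, Vb6Scheduler);
    }
}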

Conclusion

It’s challenging edge cases like this whole adventure that make working with legacy code time consuming in weird and unexpected ways. If we had just stuck to pure .NET functionality, we wouldn’t have run into any of these problems, but we would have paid a different price in reimplementing functionality that already exists, both in terms of development time, and in terms of risk (in that we don’t fully understand all of the things the current functionality does).

I think we made the right decision, in that the actual program functionality is the same as it’s always been (doing whatever it does), and we instead paid a technical price in order to get it to work well, as opposed to forcing the user to accept a sub-par feature.

It’s still not immediately clear to me how the VB6 and .NET functionality actually works together at all (with the application windowing, threading and various message pumps and loops), but it does work, so at least we have that.

I do look forward to the day when we can lay this application to rest though, giving it the peace it deserves after many years of hard service.

Yes, I’ve personified the application in order to empathise with it.


With my altogether too short break out of the way, it’s time for another post about something software related.

This week’s topic? Embedding websites into windows desktop applications for fun and profit.

Not exactly the sexiest of topics, but over the years I’ve run into a few situations where the most effective solution to a problem involved embedding a website into another application. Usually an external party has some functionality that they want your users to access and they’ve put a bunch of effort into building a website to do just that. If you’re unlucky, they haven’t really thought ahead and the website is the only option for integration (what’s an API?), but all is not lost.

Real value can still be delivered to the users who want access to this amazing thing you have no control over.

So Many Options!

I’ve been pretty clear in the posts on this blog that one of the things my team is responsible for is a legacy VB6 desktop application. Now, it’s still being actively maintained and extended, so it’s not dead, but we try not to write VB6 when we can avoid it. Pretty much any new functionality we implement is written in C#, and if we need to present something visual to the user we default to WPF.

Hence, I’m going to narrow the scope of this post down to those technologies, with some extra information from a specific situation we ran into recently.

Right at the start of the whole “hey, let’s jam a website up in here” thought process, the first thing you need to do is decide whether or not you can “integrate” by just shelling out to the website using the current system default browser.

If you can, for the love of all that is good and holy, do it. You will save yourself a whole bunch of pain.

Of course, if you need to provide a deeper integration than that, then you’re going to have to delve into the wonderful world of WPF web browser controls.

Examples include:

There are definitely other offerings, but I don’t know what they are. I can extrapolate on the first two (because I’ve used them both in anger), but I can’t really talk about the third one. I only included it because I’ve heard about it specifically.

Chrome Dome

CEFSharp is a .NET (both WPF and WinForms) wrapper around the Chromium Embedded Framework, and to be honest, it’s pretty great.

In fact, I’ve already written a post about using CEFSharp in a different project. The goal there was to host a website within a desktop application, but there were some tricksy bits around supplying data to the website directly (via shared Javascript context), and we actually owned the website being embedded, so we had a lot of flexibility around making it do exactly what we needed it to do.

The CEFSharp library is usually my first port of call when it comes to embedding a website in a desktop application.

Unfortunately, when we tried to leverage CEFSharp.WPF in our VB6/C# franken-application, we ran into some seriously weird issues.

Our legacy application is at its core VB6. All of the .NET code is triggered from the VB6 via a COM interop, which essentially amounts to a message bus with handlers on the .NET side. VB6 raises an event, .NET handles it. Due to the magic of COM, this means that you can pretty much do all the .NET things, including using the various UI frameworks like WinForms and WPF. There is some weirdness with windows and who owns them, but all in all it works pretty well.

To get to the point, we put a CEFSharp.WPF browser into a WPF screen, triggered it from VB6 and from that point forward the application would crash randomly with Access Violations any time after the screen was closed.

We tried the obvious things, like controlling the lifetime of the browser control ourselves (and disposing of it whenever the window closed), but in the end we didn’t get to the bottom of the problem and gave up on CEFSharp. Disappointing but not super surprising, given that that sort of stuff is damn near impossible to diagnose, especially when you’re working in a system built by stitching together a bunch of technological corpses.

Aiiiieeeeeee!

Then there is the built-in WPF WebBrowser control, which accomplishes basically the same thing.

Why not go with this one first? Surely built in components are going to be superior and better supported compared to third party components?

Well, for one, it’s somewhat dependent on Internet Explorer, which can lead to all sorts of weirdness.

A good example of said weirdness is the following issue we encountered:

  • You try to load an HTTPS website using TLS 1.2 through the WebBrowser control
  • It doesn’t work, giving a “page cannot be loaded” error, but doesn’t tell you why
  • You load the page in Chrome and it works fine
  • You load the page in Internet Explorer and it tells you TLS 1.2 is not enabled
  • You go into the Internet Explorer settings and enable support for TLS 1.2 (the registry equivalent of that setting is sketched just after this list)
  • Internet Explorer works
  • Your application also magically works
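For what it’s worth, the checkbox in that Internet Explorer step is backed by the SecureProtocols value under the WinINET Internet Settings registry key, so the same change can be checked (or made) in code if flipping it by hand ever gets old. A rough sketch, where 0x800 is the documented flag for TLS 1.2:

// Ensures the TLS 1.2 flag is present in the current user's WinINET SecureProtocols value,
// which is what the "Use TLS 1.2" checkbox in Internet Explorer's advanced settings toggles.
private static void EnableTls12ForWinInet()
{
    const int tls12 = 0x800;

    using (var settings = Microsoft.Win32.Registry.CurrentUser.OpenSubKey(@"Software\Microsoft\Windows\CurrentVersion\Internet Settings", writable: true))
    {
        var current = (int)(settings.GetValue("SecureProtocols") ?? 0);
        if ((current & tls12) == 0)
        {
            settings.SetValue("SecureProtocols", current | tls12, Microsoft.Win32.RegistryValueKind.DWord);
        }
    }
}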

The second pertinent piece of weirdness relates specifically to the control’s participation in the WPF layout and rendering engine.

The WebBrowser control does not follow the same rendering logic as a normal WPF control, likely because it’s just a wrapper around something else. It works very similarly to a WinForms control hosted in WPF, which is a nice way of saying it works pretty terribly.

For example, it renders on top of everything regardless of how you think you organised it, which can lead to all sorts of strange visual artefacts if you tend to use the Z-axis to create layered interfaces.

Conclusion

With CEFSharp causing mysterious Access Violations that we could not diagnose, the default WPF WebBrowser was our only choice. We just had to be careful with when and how we rendered it.

Luckily, the website we needed to use was relatively small and simple (it was a way for us to handle sensitive data in a secure way), so while it was weird and ugly, the default WebBrowser did the job. It didn’t exactly make it easy to craft a nice user experience, but a lot of the pain we experienced there was more the fault of the website itself than the integration.

That’s a whole other story though.

In the end, if you don’t have a horrifying abomination of technologies like we do, you can probably just use CEFSharp. It’s solid, well supported and has heaps of useful features, assuming you handle it with a bit of care.


Working on a legacy system definitely has its challenges.

For example, it’s very common for there to be large amounts of important business logic encapsulated in the structures and features of the database. Usually this takes the form of things like stored procedures and functions, default values and triggers, and when you put them all together, they can provide a surprising amount of functionality for older applications.

While not ideal by today’s standards, this sort of approach is not necessarily terrible. If the pattern was followed consistently, at least all of the logic is in the same place, and legacy apps tend to have these magical global database connections anyway, so you can always get to the DB whenever you need to.

That is, until you start adding additional functionality in a more recent programming language, and you want to follow good development practices.

Like automated tests.

What The EF

If you’re using Entity Framework on top of an already existing database and you have stored procedures that you want (need?) to leverage, you have a few options.

The first is to simply include the stored procedure or function when you use the Entity Data Model Wizard in Visual Studio. This will create a function on the DbContext to call the stored procedure, and map the result set into some set of entities. If you need to change the entity return type, you can do that too, all you have to do is make sure the property names line up. This approach is useful when your stored procedures represent business logic, like calculations or projections.
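Once the wizard has done its thing, calling the stored procedure is just a method call on the context, something along these lines (the names are made up for illustration):

using (var context = new LegacyContext())
{
    // The wizard-generated function import executes the stored procedure and materialises
    // the result set into the mapped entity or complex type.
    var movements = context.GetTrustAccountMovements(trustAccountId, fromDate, toDate).ToList();
}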

If the stored procedures in the database represent custom insert/update/delete functionality, then you can simply map the entity in question to its stored procedures. The default mapping statement will attempt to line everything up using a few naming conventions, but you also have the ability to override that behaviour and specify procedures and functions as necessary.

If you don’t want to encapsulate the usage of the stored procedures, you can also just use the SqlQuery and ExecuteSqlCommandAsync functions available on the DbContext.Database property, but that requires you to repeat the usage of magic strings (the stored procedure and function names) whenever you want to execute the functionality, so I don’t recommend it.

So, in summary, it’s all very possible, and it will all work, up until you want to test your code using an in-memory database.

Which is something we do all the time. 

In Loving Memory

To prevent us from having to take a direct dependency on the DbContext, we lean towards using factories.

There are a few reasons for this, but the main one is that it makes it far easier to reason about DbContext scope (you make a context, you destroy a context) and to limit potential concurrency issues within the DbContext itself. Our general approach is to have one factory for connecting to a real database (i.e. ExistingLegacyDatabaseDbContextFactory) and then another for testing (like an InMemoryDbContextFactory, using Effort). They both share an interface (usually just the IDbContextFactory<TContext> interface), which is taken as a dependency as necessary, and the correct factory is injected whenever the object graph is resolved using our IoC container of choice.
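Stripped right down, the two factories end up looking something like the sketch below. The context type and its constructors are illustrative, but Effort.DbConnectionFactory.CreateTransient is the real entry point into Effort:

public class ExistingLegacyDatabaseDbContextFactory : IDbContextFactory<LegacyContext>
{
    public LegacyContext Create()
    {
        // Assumes the context has a constructor that takes a connection string name.
        return new LegacyContext("name=LegacyDatabase");
    }
}

public class InMemoryDbContextFactory : IDbContextFactory<LegacyContext>
{
    public LegacyContext Create()
    {
        // Effort hands back an in-memory DbConnection that Entity Framework can run over.
        // In a real test fixture you would usually create the connection once and share it,
        // so that every context created during the test sees the same data.
        var connection = Effort.DbConnectionFactory.CreateTransient();
        return new LegacyContext(connection, contextOwnsConnection: true);
    }
}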

Long story short, we’re still using the same DbContext, we just have different ways of creating it, giving us full control over the underlying provider at the dependency injection level.

When we want to use an in-memory database, Effort will create the appropriate structures for us using the entity mappings provided, but it can’t create the stored procedures because it doesn’t know anything about them (except maybe their names). Therefore, if we use any of the approaches I’ve outlined above, the in-memory database will be fundamentally broken depending on which bits you want to use.

This is one of the ways that Entity Framework and its database providers are something of a leaky abstraction, but that is a topic for another day.

This is pretty terrible for testing purposes, because sometimes the code will work, and sometimes it won’t.

But what else can we do?

Abstract Art

This is one of those nice cases where an abstraction actually comes to the rescue, instead of just making everything one level removed from what you care about and ten times harder to understand.

Each stored procedure and function can easily have an interface created for it, as they all take some set of parameters and return either nothing or some set of results.

We can then have two implementations, one which uses a database connection to execute the stored procedure/function directly, and another which replicates the same functionality through Linq or something similar (i.e. using the DbContext). We bind the interface to the first implementation when we’re running on top of a real database, and to the DbContext specific implementation when we’re not. If a function calls another function in the database, you can replicate the same approach by specifying the function as a dependency on the Linq implementation, which works rather nicely.
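As a concrete (but entirely hypothetical) example of the shape of the thing, the procedure name, parameters and Linq body below are all made up; the pattern is the interesting part:

public interface ICalculateTrustBalance
{
    decimal Execute(int trustAccountId);
}

// Bound when running against the real database; delegates straight to the stored procedure.
public class StoredProcedureCalculateTrustBalance : ICalculateTrustBalance
{
    private readonly IDbContextFactory<LegacyContext> _factory;

    public StoredProcedureCalculateTrustBalance(IDbContextFactory<LegacyContext> factory)
    {
        _factory = factory;
    }

    public decimal Execute(int trustAccountId)
    {
        using (var context = _factory.Create())
        {
            return context.Database.SqlQuery<decimal>("EXEC dbo.CalculateTrustBalance @p0", trustAccountId).Single();
        }
    }
}

// Bound when running on top of the in-memory provider; replicates the same logic in Linq.
public class LinqCalculateTrustBalance : ICalculateTrustBalance
{
    private readonly IDbContextFactory<LegacyContext> _factory;

    public LinqCalculateTrustBalance(IDbContextFactory<LegacyContext> factory)
    {
        _factory = factory;
    }

    public decimal Execute(int trustAccountId)
    {
        using (var context = _factory.Create())
        {
            return context.Transactions.Where(t => t.TrustAccountId == trustAccountId).Sum(t => t.Amount);
        }
    }
}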

Of course, this whole song and dance still leaves us in a situation where the tests might do different things because there is no guarantee that the Linq based stored procedure implementation is the same as the one programmed into SQL Server.

So we write tests that compare the results returned from both for identical inputs, trusting the legacy implementation when differences are discovered.

Why bother at all though? I mean after everything is said and done, you now have two implementations to maintain instead of one, and more complexity to boot.

Other than the obvious case of “now we can write tests on an in-memory database that leverage stored procedures”, there are a few other factors in favour of this approach:

  • With a good abstraction in place, its more obvious what is taking a dependency on the stored procedures in the database
  • With a solid Linq based implementation of the stored procedure, we can think about retiring them altogether, putting the logic where it belongs (in the domain)
  • We gain large amounts of knowledge around the legacy stored procedures while building and testing the replacement, which makes them less mysterious and dangerous
  • We have established a strong pattern for how to get at some of the older functionality from our new and shiny code, leaving less room for sloppy implementations

So from my point of view, the benefits outweigh the costs.

Conclusion

When trying to leverage stored procedures and functions programmed into a database, I recommend creating interfaces to abstract their usages. You are then free to provide implementations of said interfaces based on the underlying database provider, which feels a lot more flexible than just lumping the function execution into whatever structures that EF provides for that purpose. The approach does end up adding some additional complexity and effort, but the ability to ensure that tests can run without requiring a real database (which is slow and painful) is valuable enough, even if you ignore the other benefits.

Caveat, the approach probably wouldn’t work as well if there aren’t good dependency injection systems in place, but the general concept is sound regardless.

To echo my opening statement, working with legacy code definitely has its own unique set of challenges. It’s nice in that way though, because solving those challenges can really make you think about how to provide a good solution within the boundaries and limitations that have already been established.

Like playing a game with a challenge mode enabled, except you get paid at the end.