A financial transaction on the modern web can be a complicated affair. Besides interacting with external payment providers, you may need to: validate bank data using external services; push personal information to a customer relationship management (CRM) platform; record a purchase on an e-commerce platform; store non-financial data for export; extract names and addresses from a digital wallet service (e.g. Apple Pay); do accountancy calculations; or any number of other accompanying tasks. Often these steps need to happen in a very specific order, so that IDs generated at one stage (a payment ID, say) can be stored appropriately at another.

Ten4 builds financial platforms for a lot of different clients, and no two are ever exactly the same. If we built bespoke every time, complexity would quickly balloon: we'd release multiple incompatible codebases, and any bug found in one tool would have to be checked against all the others and fixed one by one.

To stop our team being overwhelmed, we needed a reusable transaction system flexible enough to cover all of these cases. With that goal in mind, we started to develop Pass Go, a layered transaction-processing library.

Modularity and processors

When building any system, it's important to keep things as modular as possible; if every part of the system does a bit of everything, it's very difficult to understand exactly what your code is doing, and near-impossible to reuse it. We wanted to be able to mix and match pieces of code for each kind of action we needed to do, so we could easily assemble custom transaction flows for many different requirements. We decided to call these pieces of code 'processors'.

To allow the flexibility we needed, each processor responsible for an action had to be completely unaware of other processors in the flow:

If processor X relied on processor Y, and our flow included X but not Y, then X would fail and our 'modular' system would not be very modular.

We'd be looking up lists of compatible processors every time, and the whole point of the exercise would be missed. Processors, therefore, would have to exist in their own little world, and perform all of their tasks themselves.
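
As a rough illustration, a processor in this model could be as small as a single interface. Here's a minimal sketch in TypeScript; the names are hypothetical, not Pass Go's actual API:

// A processor is one self-contained unit of work: it receives a
// record, performs its own tasks, and returns the (possibly updated)
// record. It never imports or references another processor.
type TransactionRecord = { [key: string]: unknown }; // fleshed out under 'Records' below

interface Processor {
  process(record: TransactionRecord): Promise<TransactionRecord>;
}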

Records

But if processors have no knowledge of each other and can't interact, how can they do anything useful together?

The answer is 'records'. A record represents a single transaction, and is the only place where transaction-specific data is stored. Processors would be permitted to use and update the data in records, so the record becomes the vehicle for collecting and sharing knowledge as it's passed from one processor to the next.

Records would have parameters for standard transaction information (name, email address, account number, and so on), but they could also contain custom 'metadata' parameters. Every processor gets access to the standard fields, but it may also decide to use or store metadata appropriate to its purpose. One processor could save some metadata that another processor picks up and makes use of later on, provided neither breaks when the other is absent; that way, modularity is preserved.
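
Sketched in the same hypothetical TypeScript, a record might carry a set of standard fields plus a free-form metadata bag. The field names here are illustrative, not Pass Go's real schema:

// Standard transaction fields that every processor can rely on, plus
// a 'metadata' map for processor-specific extras keyed by name.
interface TransactionRecord {
  name: string;
  email: string;
  accountNumber: string;
  amount: number;                  // e.g. minor currency units
  metadata: Map<string, unknown>;  // custom, per-processor data
}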

Flow structure

One question remained: how do we define a chain of processors, in any sensible order, such that a record can flow through them, trigger the processors' actions, and emerge at the other side loaded with transaction information?

The most obvious choice was a linear list of processors, run one after the other, but we quickly realised that certain processors might want to perform tasks both before and after other processors had completed their job. For example, a transaction entry should be saved in a CRM, then the credit card details processed, then the payment ID saved onto the CRM entry. Using a linear flow meant certain processors would have to be split and used as pairs; a decidedly non-modular architecture.

To solve this problem while preserving the flexibility of our library, we settled on a 'matryoshka' (Russian Doll) or 'onion' model; processors are nested within each other, and records flow from the outer processor to the innermost processor, through each layer, then back out again. Each processor is responsible for running (or not) its child processor, so it can decide what tasks to perform before and after.

Above: a diagram of our 'onion' processing model. Records flow from the outer processor to the innermost processor, through each layer, then back out again.
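
In code, the onion model boils down to each processor optionally wrapping a single child. A minimal sketch, continuing the hypothetical TypeScript from above:

// Each processor may wrap one child processor. Anything done before
// the child call happens on the way in; anything done after it
// happens on the way out. The processor can also decide not to run
// its child at all (e.g. if validation failed on the way in).
abstract class LayeredProcessor {
  constructor(protected child?: LayeredProcessor) {}

  async process(record: TransactionRecord): Promise<TransactionRecord> {
    record = await this.onWayIn(record);          // inward phase
    if (this.child) {
      record = await this.child.process(record); // descend one layer
    }
    return this.onWayOut(record);                 // outward phase
  }

  protected async onWayIn(record: TransactionRecord): Promise<TransactionRecord> {
    return record; // default: do nothing on the way in
  }

  protected async onWayOut(record: TransactionRecord): Promise<TransactionRecord> {
    return record; // default: do nothing on the way out
  }
}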

Example

Using the earlier example, we may want to save a CRM entry before taking payment, and then save the payment ID onto the CRM entry if payment was successful. Here's how Pass Go would do it, using two processors:

Record
|
 \ CRM Processor > IN: Create CRM entry
 \ Card Processor > IN: Process card payment
 / Card Processor > OUT: -
 / CRM Processor > OUT: Save card payment ID into CRM
|
v

Since the Card Processor saves a payment ID onto the record, the CRM Processor can see it during the outward phase, and make the necessary API calls to save it to the CRM.
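
Building on the LayeredProcessor sketch above, those two processors might look like this. The crm and gateway clients and the metadata keys are stand-ins invented for illustration:

// Stand-in API clients -- assume these wrap the real provider SDKs.
declare const crm: {
  createEntry(r: TransactionRecord): Promise<string>;
  savePaymentId(entryId: unknown, paymentId: unknown): Promise<void>;
};
declare const gateway: {
  charge(r: TransactionRecord): Promise<string>;
};

class CrmProcessor extends LayeredProcessor {
  protected async onWayIn(record: TransactionRecord) {
    // IN: create the CRM entry and remember its ID on the record
    record.metadata.set("crmEntryId", await crm.createEntry(record));
    return record;
  }
  protected async onWayOut(record: TransactionRecord) {
    // OUT: by now the Card Processor has stored a payment ID
    await crm.savePaymentId(
      record.metadata.get("crmEntryId"),
      record.metadata.get("paymentId"),
    );
    return record;
  }
}

class CardProcessor extends LayeredProcessor {
  protected async onWayIn(record: TransactionRecord) {
    // IN: take the payment and store its ID for outer layers
    record.metadata.set("paymentId", await gateway.charge(record));
    return record;
  }
  // OUT: nothing to do, so the default no-op applies
}

// Nesting: the CRM processor wraps the Card processor, so it runs
// first on the way in and last on the way out.
const flow = new CrmProcessor(new CardProcessor());
// flow.process(record) then walks the layers in the order shown above.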

Now, perhaps we want a copy of each transaction saved into a local database too. We can add another processor to do that:

Record
|
 \ Local Database Processor > IN: -
 \ CRM Processor > IN: Create CRM entry
 \ Card Processor > IN: Process card payment
 / Card Processor > OUT: -
 / CRM Processor > OUT: Save card payment ID into CRM
 / Local Database Processor > OUT: Save transaction data to local database
|
v

The Local Database Processor is the first and last processor to run, but it does nothing on the way in (because the record doesn't contain all the relevant information yet). Only once every other processor has finished do we have a complete picture of the transaction to save.
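
In the same hypothetical sketch, the new outermost layer overrides only the outward step, and the existing flow simply gains one more wrapping; db is another stand-in client:

// Stand-in for a local persistence layer.
declare const db: {
  saveTransaction(r: TransactionRecord): Promise<void>;
};

class LocalDatabaseProcessor extends LayeredProcessor {
  // No onWayIn override: the default no-op runs, because the record
  // is still incomplete on the way in.
  protected async onWayOut(record: TransactionRecord) {
    // OUT: every inner layer has finished, so the record now holds
    // the payment ID, the CRM entry ID, and the rest of the data.
    await db.saveTransaction(record);
    return record;
  }
}

// Wrap the existing flow in the new layer; the inner processors are
// completely untouched.
const fullFlow = new LocalDatabaseProcessor(
  new CrmProcessor(new CardProcessor()),
);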

Summary

Using our modular building blocks, we've built up a complex transactional flow that is easily modified. If we changed card payment provider, we'd only need to swap out the Card Processor to handle it; the other processors can remain exactly as they are. If we need to introduce another stage to the transaction flow, we'd simply add another processor layer. This model could theoretically even support a branching flow, for really complex applications.

We've been using Pass Go for many months now, in a variety of transactional applications, and it continues to perform solidly. As we add support for more and more actions and providers, our library of reusable processors grows, and we can offer our clients more power for less cost, and fewer developer headaches.
