Insight | Wednesday 2nd March 2022

CDM myth #2: it’s for blockchain

This is my second post in the Common Domain Model (CDM) myth-busting series. For a primer on CDM, you may want to check my previous post on the topic (CDM myth #1: it’s a new standard) or Nigel’s excellent 2021 recap (A year in the life of the CDM).

As before, I will attempt to uncover some fundamental truths about the CDM by running counter to received wisdom.

So here we go, another popular myth: “CDM is for blockchain”. I really like this one because, like any good old myth, it features a fantastic beast with a tantalising name.

Why blockchain?

There is a grain of truth in that confusion. Both technologies have the potential to radically streamline post-trade operations. For blockchain, or distributed ledger technology (DLT) more generally, bringing efficiency to post-trade processes is one of its promised applications in capital markets – just follow the VC money. For the CDM, this is its raison d’être, as explained in ISDA’s white paper on the future of derivatives processing that seeded the whole project.

Crucially, this post will argue that the CDM and DLT must remain distinct if they are to fulfil their potential to transform the post-trade landscape.

That the CDM and DLT share this focus makes sense, given the vast sums ploughed in every year by firms to maintain and operate the creaking industry post-trade pipework. Together, the technologies could save up to 85% of total costs, or $3bn every year, according to preliminary industry estimates. However, it is important to recognise the different roles that the CDM and DLT play in that equation. While the two technologies can be combined, they provide distinct value propositions: they are independent of, and complementary to, one another.

In short, DLT provides a form of database whereas the CDM provides a logical model for the data in question. In the case of post-trade processing, the data represent trade events through the transaction lifecycle. But – and this is a massive “but” that fundamentally changes the industry’s operating model – the CDM also provides the functional logic that sits on top of the data. Not only that, it distributes that logic as machine-executable code, including the state-transition logic for generating and processing trade events.
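To make that distinction concrete, here is a minimal, purely illustrative sketch in Python (not actual CDM code; every type and function name is invented for this post) of what “a logical model plus machine-executable state-transition logic” could look like:

```python
from dataclasses import dataclass, replace
from datetime import date

# A toy "logical model": the state of a trade and a lifecycle event.
@dataclass(frozen=True)
class TradeState:
    trade_id: str
    notional: float
    currency: str
    terminated: bool = False

@dataclass(frozen=True)
class TerminationEvent:
    trade_id: str
    event_date: date

# Toy "state-transition logic": a pure function from (current state, event)
# to the new state. In the CDM, logic of this kind is part of the model
# itself and is distributed as machine-executable code, independently of
# where the resulting states happen to be stored.
def apply_termination(state: TradeState, event: TerminationEvent) -> TradeState:
    if state.trade_id != event.trade_id:
        raise ValueError("event does not apply to this trade")
    if state.terminated:
        raise ValueError("trade is already terminated")
    return replace(state, terminated=True)
```

The point is that the logic travels with the model: two firms running the same function against the same state and event derive the same new state, whatever database or ledger each of them uses.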

Yes, I can hear the question coming…

Isn’t the CDM a kind of “smart contract” then?

This is where the lines between CDM, DLT and also “smart contract” can blur and become a source of confusion – but also where it is critical to maintain a clear separation. Any smart contract implementation needs a stack of three components:

  • A database or ledger, which could be distributed, or not – that’s where the DLT sits
  • A “coded” representation of contracts recorded on the ledger – that’s where the CDM sits
  • Application(s), which operate as the interface between users (human or machine) and the ledger – e.g. for trade capture
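One way to picture that separation is as three independent layers, where only the application knows about the other two. The sketch below is hypothetical (the interfaces and names are made up for illustration, not drawn from any particular product or API):

```python
from typing import Any, Protocol

class Ledger(Protocol):
    """Storage layer: could be a distributed ledger or a plain database."""
    def read(self, key: str) -> Any: ...
    def write(self, key: str, value: Any) -> None: ...

class CodedContract(Protocol):
    """Model layer: the coded representation of the contract and its
    lifecycle logic - this is where a CDM-based component would sit."""
    def apply_event(self, state: Any, event: Any) -> Any: ...

class TradeCaptureApp:
    """Application layer: the interface between users and the ledger.
    It is wired to the other two components but owns neither the data
    model nor the storage."""
    def __init__(self, ledger: Ledger, contract: CodedContract) -> None:
        self.ledger = ledger
        self.contract = contract

    def process(self, trade_id: str, event: Any) -> None:
        state = self.ledger.read(trade_id)
        new_state = self.contract.apply_event(state, event)
        self.ledger.write(trade_id, new_state)
```

Swapping the ledger for a different protocol, or the application for a different vendor’s tool, should not require touching the coded contract.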

To ensure an interoperable future, decentralised or not, these three components must remain separate. Otherwise the industry risks replicating the fragmentation that prevails in today’s trade processing infrastructure into that of tomorrow – and probably worse still if decentralised. The field is a mess precisely because every point-solution in the trade lifecycle’s flow (say for confirmation, collateral management or reporting – you name it) comes with all three components shoved into one, and so never bothers interoperating with any other point-solution addressing another part of that flow.

For the foreseeable future at least, different trade processes will need to co-exist “on-chain” (think trade pairing, matching and confirmation) and “off-chain” (think risk analytics). Financial institutions will need to hop on and off easily as required, with guaranteed consistency of the coded contract. Even in a pure DLT environment, different trade processes will have different requirements regarding throughput or privacy, which different chain protocols may be better suited to. Finally, even a single ledger should support a variety of applications addressing different parts of the trade lifecycle, from which financial institutions can pick and choose to assemble their middle- and back-office operations.

All this seamless switching is only possible by keeping the ledger, the coded representation and the applications separate. Otherwise, it would be like bundling your TV, internet service provider and video streamer into one – and then trying to navigate between Netflix and Disney+…

How can the industry achieve that separation of concerns?

The industry needs to identify clearly the role that each technology component should play, assign that role as its sole purpose, and resist the mission creep that tends to occur when technologies are left to develop organically.

At its purest, DLT provides three things (and the definition applies equally to a non-decentralised scenario, i.e. it is independent of the “D” in DLT):

  • A storage mechanism, i.e. literally a data base – that’s the ledger
  • An update mechanism, i.e. how that database evolves, and who can change it
  • A privacy mechanism, i.e. who has permission to see what – for capital markets applications, a public, permissionless ledger of the kind that powers most crypto-currencies is a non-starter
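Purely as an illustration (hypothetical names again, not a real DLT API), those three mechanisms could be expressed as a single interface – note that no trade-processing logic appears anywhere in it:

```python
from typing import Any, Iterable, Protocol

class LedgerProtocol(Protocol):
    # Storage mechanism: the ledger itself
    def append(self, record: Any) -> None: ...
    def records(self) -> Iterable[Any]: ...

    # Update mechanism: how the ledger evolves, and who can change it
    def validate_update(self, record: Any, submitter: str) -> bool: ...

    # Privacy mechanism: who has permission to see what
    def is_visible_to(self, record: Any, viewer: str) -> bool: ...
```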

Accordingly, here are examples of what the CDM should not try to provide:

  • A storage specification, such as a database schema – for the difference between that and a logical model, see my earlier post (CDM myth #1: it’s a new standard)
  • Any form of “consensus” mechanism for how new transactions are validated and added to the ledger
  • Support for data privacy – this is particularly relevant for complex lifecycle events that may involve more than two counterparties, e.g. clearing

On the other hand, DLT must not encroach on the CDM and try to define its own logic – it should remain purely “non-functional”. Perhaps less obviously, applications, distributed or not, should also be kept independent of the trade processing logic itself.

Reporting from the regulatory front

The best illustration of this separation is the “Digital Regulatory Reporting” (DRR) programme currently tackling trade reporting obligations across the G20 jurisdictions. These obligations can be seen as a form of smart contract between regulated entities and their regulators, and DRR leverages the CDM to build a shared, machine-executable interpretation of that “contract”.

Industry participants, whether financial institutions or technology solution providers, can use the DRR output to develop their own reporting implementations. Crucially, implementers don’t need to code any of the reporting rules and can focus on building performant, user-friendly compliance solutions and tools – with the certainty that the whole industry will apply the rules consistently.
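To give a flavour of what “machine-executable reporting rules” means in practice, here is a deliberately simplified sketch – the field names and the rule below are invented for illustration and do not reproduce any actual DRR or regulatory logic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReportableEvent:
    trade_id: str
    notional: float
    currency: str
    is_cleared: bool

# A reporting "rule" expressed as a pure function from a trade event to
# report fields. In DRR, rules of this kind are mutualised across the
# industry and distributed as executable code, so implementers plug them
# into their own infrastructure rather than re-coding them.
def report_fields(event: ReportableEvent) -> dict:
    return {
        "UTI": event.trade_id,
        "Notional": event.notional,
        "NotionalCurrency": event.currency,
        "ClearingIndicator": "Y" if event.is_cleared else "N",
    }
```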

By the way, some of these solutions could be supported by a DLT, if there’s a use-case for it, but it’s not a prerequisite for DRR.

But… all the cool kids are doing it

In conclusion, tempting as it is for the CDM to join the blockchain bandwagon, it will most benefit the industry by staying completely ledger-neutral. Conversely, DLT and its applications will be most valuable to post-trade operations by leveraging what the CDM provides, rather than redefining their own logical model or trade processing logic.

Please get in touch with REGnosys and see how we can help you benefit from the CDM or DRR.

Leo Labeis

Founder & CEO at REGnosys
