Every payment modernisation programme reaches a moment when the integration work feels like it should be straightforward. The architecture is defined. The target state is agreed. The vendors are selected. The teams are in place. Then the detailed specification work begins, and an uncomfortable truth surfaces: the parties building toward the same outcome are not actually speaking the same language. They share the same high-level concepts, but their detailed data models, their field-level assumptions, and their error-handling expectations diverge in dozens of places that will only matter when transactions start flowing.
This is not a failure of architecture. It is the predictable consequence of building a multi-party integration programme where each party has developed its specification in relative isolation. The central bank publishes a standard. The payment network interprets it. Each participant bank implements against its own reading. The vendors build to their own assumptions. By the time detailed technical review begins, the distance between these specifications can be significant: not at the level of concepts, but at the level of the precise codes, formats, and sequencing that determine whether a transaction succeeds or fails.
Programmes that find specification misalignment at design time fix it with a document update. Programmes that find it at go-live fix it with an incident. The review work that feels like overhead is actually the cheapest form of testing you will ever do.
The discipline of systematic specification review—comparing what each party's technical documentation actually says, field by field, rather than assuming alignment from a shared conceptual framework—is one of the most undervalued practices in payment modernisation programmes. It is unglamorous work. It produces findings documents rather than architecture diagrams. It does not feature in programme roadmaps or governance decks. But it is precisely this work that separates programmes that encounter integration failures in production from those that resolve them quietly at the design stage.
What misalignment looks like in practice
Specification misalignment rarely appears as fundamental disagreement about what a system should do. It appears as divergence in the details that seem minor until they are not: a field that one specification treats as mandatory and another treats as optional; an identifier format that differs by two characters in ways that break downstream validation; a status code enumeration that one party has extended and another has not adopted; a fee structure that is calculated differently depending on which specification you read. Individually, each finding looks like a small thing. Cumulatively, they represent a category of integration risk that is very difficult to manage once systems are in test environments and nearly impossible to manage once they are in production.
The programmes that manage this risk well do so through structured pre-integration review: a systematic comparison of each party's specification against the canonical standard, followed by a reconciliation process that forces each discrepancy to a resolution before implementation begins. This is not an architecture activity. It is a programme governance activity—requiring the discipline to hold vendors and internal teams accountable for resolving findings rather than deferring them to integration testing.
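The comparison described above can be made mechanical once each specification is captured in a structured form. The following sketch is purely illustrative: the field names, format shorthands, and reason codes are invented for the example and are not taken from any real scheme's documentation. It shows how the three divergence types mentioned earlier (optionality, format, and extended code lists) fall out of a simple field-by-field walk against the canonical standard.

```python
# Illustrative sketch of field-by-field specification comparison.
# All field names, formats, and codes below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldSpec:
    name: str
    mandatory: bool       # does the spec require the field to be present?
    fmt: str              # format shorthand, e.g. "max35text" or "4!a"
    codes: frozenset      # allowed enumeration values; empty = free text

def compare(canonical, party, party_name):
    """Return one finding per divergence between a party's spec and the standard."""
    findings = []
    for name, std in canonical.items():
        impl = party.get(name)
        if impl is None:
            findings.append(f"{party_name}: '{name}' missing entirely")
            continue
        if impl.mandatory != std.mandatory:
            findings.append(f"{party_name}: '{name}' optionality differs from standard")
        if impl.fmt != std.fmt:
            findings.append(f"{party_name}: '{name}' format differs ({impl.fmt} vs {std.fmt})")
        extra = impl.codes - std.codes
        if extra:
            findings.append(f"{party_name}: '{name}' extends the code list with {sorted(extra)}")
    return findings

# Hypothetical canonical standard and one participant's reading of it.
canonical = {
    "end_to_end_id": FieldSpec("end_to_end_id", True, "max35text", frozenset()),
    "reason_code":   FieldSpec("reason_code", True, "4!a", frozenset({"AC01", "AM05"})),
}
bank_a = {
    # Treats the identifier as optional -- a divergence from the standard.
    "end_to_end_id": FieldSpec("end_to_end_id", False, "max35text", frozenset()),
    # Has extended the reason-code enumeration with a proprietary value.
    "reason_code":   FieldSpec("reason_code", True, "4!a", frozenset({"AC01", "AM05", "XT99"})),
}

findings = compare(canonical, bank_a, "Bank A")
```

The point of the sketch is not the code itself but the shape of the output: a flat list of discrete findings, each attributable to one party and one field, which is exactly the unit that the reconciliation process then has to force to a resolution.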
The governance structure that makes review effective
Specification review only produces value if the programme has the governance structure to act on findings. In practice, this means three things. First, a clear owner for each specification—a party who has the authority to update their documentation and the accountability to do so on a schedule that does not block integration work. Second, a tracking mechanism that distinguishes between findings that are cosmetic, findings that require one party to change their implementation, and findings that require a policy decision about which interpretation is correct. Third, an escalation path that reaches someone who can make decisions when two parties disagree about whose specification should prevail.
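The three-way distinction above lends itself to a simple tracking structure. This is a hypothetical sketch of how a findings log might encode an owner, a category, and a resolution state, with the escalation path surfaced as a query for unresolved policy findings; the reference numbers and party names are invented for the example.

```python
# Illustrative findings log; categories mirror the three kinds of finding
# described in the text. All refs and owners are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    COSMETIC = "cosmetic"              # documentation wording only
    IMPLEMENTATION = "implementation"  # one party must change their implementation
    POLICY = "policy"                  # needs a ruling on which interpretation prevails

@dataclass
class Finding:
    ref: str
    owner: str           # party accountable for updating their specification
    category: Category
    resolved: bool = False

def escalation_queue(findings):
    """Unresolved policy findings are the ones that need a decision-maker."""
    return sorted(
        (f for f in findings if not f.resolved and f.category is Category.POLICY),
        key=lambda f: f.ref,
    )

log = [
    Finding("F-001", "Bank A", Category.COSMETIC, resolved=True),
    Finding("F-002", "Network", Category.IMPLEMENTATION),
    Finding("F-003", "Bank B", Category.POLICY),
]
queue = escalation_queue(log)
```

The design choice worth noting is that the category is assigned per finding, not per party: the same specification owner can hold cosmetic, implementation, and policy findings simultaneously, and only the last kind should ever reach the escalation path.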
Without this governance structure, review processes produce findings documents that sit unresolved while the programme continues to move forward on the assumption that the findings will be addressed before go-live. They are often not. The findings that are technically straightforward get resolved; the findings that require a party to change its implementation, or to concede an interpretation it would rather not concede, tend to accumulate until they become the programme's most intractable problems.
Payment modernisation as a data alignment problem
There is a useful reframe available to any programme leader who is trying to understand why payment modernisation programmes are harder than they appear. At one level, these programmes are about replacing ageing infrastructure with modern platforms. At another level, they are fundamentally data alignment problems: ensuring that every party in a complex ecosystem is working from the same understanding of what each data element means, how it is formatted, and how it should behave under every condition the system will encounter.
Instant payment system (IPS) programmes in particular expose this challenge. Real-time payments create no tolerance for data misalignment: a transaction either processes correctly in seconds or it fails, and the failure has an immediate customer impact. The data alignment work that might be deferred in a batch payment environment—addressed over time through exception handling and reconciliation—must be resolved before go-live in a real-time system. There is no reconciliation window that absorbs the cost of a specification disagreement. The contract must be correct before the first transaction flows.
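One concrete consequence of "no reconciliation window" is that the contract has to be enforced at message ingress: a payment either passes validation immediately or is rejected outright, with nothing deferred to a later repair pass. The sketch below illustrates that posture with invented field names and rules; a real scheme's validation rules would come from the agreed specification, not from a hand-written table like this.

```python
# Illustrative ingress validation: accept or reject in one pass, no
# deferred repair. Field names and rules are hypothetical.
import re

RULES = {
    "currency": lambda v: isinstance(v, str) and v in {"EUR", "USD", "GBP"},
    "amount": lambda v: isinstance(v, str)
              and re.fullmatch(r"\d{1,13}\.\d{2}", v) is not None,
    "end_to_end_id": lambda v: isinstance(v, str) and 1 <= len(v) <= 35,
}

def validate(msg):
    """Return (accepted, errors). A missing or malformed field is a rejection."""
    errors = [name for name, rule in RULES.items()
              if name not in msg or not rule(msg[name])]
    return (not errors, errors)

ok, errs = validate({"currency": "EUR", "amount": "100.00", "end_to_end_id": "E2E-123"})
bad, bad_errs = validate({"currency": "EUR", "amount": "100.0"})  # malformed amount, missing id
```

If two parties' specifications disagree on even one of these rules, the disagreement manifests not as a reconciliation break to be worked later but as an immediate, customer-visible rejection, which is why the alignment work has to finish before the first transaction flows.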
This is the work that ambitious payment modernisation timelines tend to underestimate. Not the platform implementation—that is well understood. Not the infrastructure migration—that is tractable. It is the patient, systematic work of finding every place where the parties involved in the programme are assuming different things, resolving each assumption to an explicit agreement, and ensuring that agreement is reflected in the code that will actually run. The programmes that budget for this work treat it as unglamorous but essential. The programmes that do not budget for it discover, usually late, that this hidden work was the critical path all along.