When a financial institution decides to outsource operations of its integration platform to a managed services provider, the governance conversation almost always centres on service levels. What is the acceptable uptime? How fast must incidents be resolved? What penalties apply when targets are missed? These are legitimate questions, and the answers matter. But the service level agreement is not the governance instrument that most determines whether the engagement delivers value or quietly creates risk. The question that matters more is: where does design authority sit?
A managed services agreement governs what happens after a system is built. It sets response times, availability commitments, and escalation paths. What it does not govern—unless the institution deliberately designs it to—is the quality of the design decisions that accumulate over time into the operational risk the provider is then paid to manage. An institution that outsources operations without retaining design authority has not reduced its risk. It has transferred visibility over the source of that risk to a party whose incentives are not perfectly aligned with its own.
The service level agreement tells you what happens when something goes wrong. The design governance structure determines how often something goes wrong, how serious it will be, and who is accountable for preventing it from happening again.
The discipline of defining the design authority boundary—which artefacts the institution owns, which artefacts the provider produces, and what gates exist before provider-produced designs become implementation—is the governance work that separates institutions that manage their vendors from institutions that are managed by them.
What the SLA does not govern
Service level agreements measure outcomes: uptime, incident resolution time, deployment success rates. What they do not measure is the quality of the decisions that produce those outcomes. A provider can consistently hit its resolution time targets while making implementation choices that increase the frequency and severity of incidents over time. It can meet its deployment success rate target in the short term by building systems that are fragile under load or difficult to change. The SLA captures the symptom. It does not constrain the cause.
This gap matters most for integration platforms, which sit at the centre of how an institution's systems interoperate. Integration systems mediate between core banking, payments, digital channels, and external partners. Their design choices—how services are modelled, how errors are handled, how change is managed, how security is enforced—have consequences that extend far beyond the managed services perimeter. A design decision made quietly by a provider team can create an operational fragility that costs multiples of the managed services contract to resolve.
The institution that has not retained design authority over these decisions is in the position of discovering the consequences after implementation rather than shaping the choices before it. That is a governance failure, not a service delivery failure—and it cannot be remedied by tightening the SLA after the fact.
The design authority split that protects institutional interests
The governance model that manages this risk establishes a clear boundary between the artefacts the institution owns and the artefacts the provider produces. High-level design and system design documentation—the architecture decisions that determine how the platform is structured, how it integrates with the broader estate, and how it supports institutional strategy—remain with the institution. The provider produces detailed implementation designs against that framework, not independent of it.
What transforms this into a governance mechanism rather than a documentation exercise is the sign-off gate. Before any provider-produced detailed design becomes the basis for implementation, it must be reviewed and approved by the institution's architecture authority. This gate is not a bureaucratic checkpoint. It is the moment at which the institution exercises its design authority in practice: confirming that the provider's implementation approach aligns with institutional standards, does not introduce dependencies the institution does not want, and reflects the security and operational requirements the platform must meet.
Without this gate, the design authority split exists on paper but not in practice. The provider builds to its own assumptions. The institution reviews the result after implementation, by which point the cost of course correction is significantly higher. The sign-off gate is what makes retained design authority real rather than theoretical.
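The gate described above can be pictured as a hard precondition rather than a review that can be bypassed. The following is a minimal sketch of that idea; the identifiers, the approval record, and the function names are illustrative assumptions, not drawn from any real agreement or tooling:

```python
# Sketch: the sign-off gate as a hard precondition on implementation.
# All names (design ids, approver) are hypothetical examples.

approvals: dict[str, str] = {}  # detailed-design id -> institutional approver


def record_signoff(design_id: str, approver: str) -> None:
    """Only the institution's architecture authority writes to this record."""
    approvals[design_id] = approver


def start_implementation(design_id: str) -> str:
    """Implementation cannot begin without a recorded institutional sign-off."""
    if design_id not in approvals:
        raise PermissionError(f"{design_id}: no institutional sign-off on record")
    return f"{design_id}: cleared for implementation (approved by {approvals[design_id]})"


record_signoff("DD-042", "architecture-board")
print(start_implementation("DD-042"))
# start_implementation("DD-043") would raise PermissionError: the gate blocks it
```

The design point is that the gate fails closed: absence of an approval record blocks implementation by default, rather than implementation proceeding unless someone objects.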
What vendor proposals reveal about accountability appetite
The terms a managed services provider proposes before a contract is signed reveal a great deal about how it intends to manage accountability during the engagement. Two patterns appear consistently in proposals that are designed to be accepted rather than scrutinised.
The first is baselining: a request to establish service level targets based on observed performance in the first months of the engagement, rather than committing to specific targets from Day 1. The stated rationale is reasonable—the provider needs time to understand the existing environment. The practical consequence is that the institution is committed to the engagement for its duration while the provider retains the flexibility to set targets at a level it knows it can achieve, not the level the institution actually needs. Baselining as a blanket mechanism for all service levels is a governance risk; baselining with defined exit criteria, time limits, and Day 1 commitments for services on known infrastructure is a legitimate operational approach.
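The distinction drawn above can be expressed as a term-by-term check: baselining is acceptable only with a time limit and defined exit criteria, and services on infrastructure the provider already knows must carry a Day 1 commitment. The sketch below illustrates that test; the field names and example services are hypothetical, not taken from any real SOW:

```python
# Sketch: separating acceptable baselining terms from blanket ones.
# Field names and example values are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class BaseliningTerm:
    service: str
    known_infrastructure: bool      # provider already operates a comparable estate
    day1_target: Optional[float]    # e.g. 99.9 (% availability), if committed
    baseline_months: Optional[int]  # time limit on the baselining period
    exit_criteria_defined: bool     # what ends baselining and fixes the target


def is_acceptable(t: BaseliningTerm) -> bool:
    """Known infrastructure demands a Day 1 commitment; anything baselined
    must be time-limited with defined exit criteria."""
    if t.known_infrastructure:
        return t.day1_target is not None
    return t.baseline_months is not None and t.exit_criteria_defined


# Blanket baselining with no exit: a governance risk.
blanket = BaseliningTerm("incident-resolution", False, None, None, False)
# Bounded baselining on a genuinely new estate: legitimate.
bounded = BaseliningTerm("batch-throughput", False, None, 6, True)

print(is_acceptable(blanket))  # -> False
print(is_acceptable(bounded))  # -> True
```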
The second is the penalty threshold: the number of consecutive breaches required before a contractual consequence applies. A threshold of three consecutive breaches for the most operationally critical service levels means the provider can absorb six to nine months of underperformance before facing any formal accountability, because measurement windows are typically monthly or quarterly and a single met period resets the count. For an integration platform where stability directly affects transaction processing, customer experience, and regulatory obligations, this is not a reasonable commercial position. It is a signal about where the provider expects performance pressure to come from during the engagement.
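One way the arithmetic behind a consecutive-breach threshold can play out is sketched below, assuming monthly measurement windows and a counter that resets whenever a single month meets target. Both are assumptions for illustration, common in SLA constructions but not terms from any specific agreement:

```python
# Sketch: how a consecutive-breach penalty threshold absorbs underperformance.
# Assumes monthly windows and a counter reset on any single met month.


def months_until_penalty(monthly_results, threshold=3):
    """Return the 1-indexed month in which the penalty first applies,
    or None if the consecutive-breach count never reaches the threshold.
    monthly_results: list of bools, True = target met, False = breached."""
    consecutive = 0
    for month, met in enumerate(monthly_results, start=1):
        consecutive = 0 if met else consecutive + 1
        if consecutive >= threshold:
            return month
    return None


# A provider that breaches two months out of every three underperforms
# indefinitely without ever triggering the penalty:
print(months_until_penalty([False, False, True] * 4))  # -> None

# Even sustained failure runs for a full quarter before any consequence:
print(months_until_penalty([False] * 6))  # -> 3
```

The reset behaviour is the point: under this construction, persistent but intermittent underperformance never reaches formal accountability at all.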
The negotiation phase is when these signals can be addressed. Once the contract is signed, the institution's leverage is the relationship and the threat of non-renewal. Before it is signed, every term is an open question. The findings that surface from a systematic review of a proposed SOW and SLA framework are not procurement technicalities—they are governance decisions about what accountability the institution is actually buying.
The scope question that determines everything else
Managed services agreements for complex technology platforms frequently contain scope ambiguities that seem minor at the time of signing and become significant as the engagement matures. The most consequential is the boundary between in-scope and out-of-scope applications: which systems the provider is responsible for, how that boundary is defined, and what happens when a new system is introduced or an existing system is modified.
For integration platforms in particular, scope must be tied to the technology stack that defines the managed boundary, not to an implicit assumption about what the provider will consider reasonable to support. An agreement that does not define scope with sufficient precision creates a category of disputes that will recur throughout the engagement: the institution believes a system is in scope; the provider believes it is not; neither party has a clear mechanism for resolving the disagreement. The cost of this ambiguity is paid in escalation time, relationship friction, and eventually in the operational gaps that neither party has accepted formal responsibility for.
Resolving this requires a named application inventory as an addendum to the agreement: a list of systems that are definitively in scope, the technology boundaries that define the managed perimeter, and a change process for updating that inventory as the estate evolves. This is not an unusual ask. The fact that it is frequently absent from initial proposals suggests it is frequently omitted by design rather than by oversight.
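The inventory described above can be made machine-checkable rather than a prose annex. The sketch below shows one possible shape; every application name, field, and change reference is a hypothetical example, not content from any real agreement:

```python
# Sketch: a named application inventory as a checkable scope addendum.
# All entries, field names, and change references are illustrative.

from dataclasses import dataclass


@dataclass(frozen=True)
class ScopeEntry:
    application: str
    technology: str       # the stack that defines the managed boundary
    in_scope: bool
    change_record: str    # the change that introduced or amended the entry


INVENTORY = [
    ScopeEntry("core-banking-adapter", "integration-platform", True, "CR-001"),
    ScopeEntry("payments-gateway-flows", "integration-platform", True, "CR-001"),
    ScopeEntry("partner-file-transfer", "legacy-sftp", False, "CR-007"),
]


def is_in_scope(application: str) -> bool:
    """In scope only if explicitly listed as such. Anything unlisted resolves
    to 'not agreed', which forces the change process instead of a dispute."""
    return any(e.in_scope for e in INVENTORY if e.application == application)


print(is_in_scope("core-banking-adapter"))  # -> True
print(is_in_scope("new-digital-channel"))   # -> False: must enter via change
```

The design choice worth noting is the default: an unlisted system is not ambiguously in scope, it is definitively outside the agreement until the change process adds it.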
The negotiation as governance architecture
The review of a managed services SOW and SLA framework is not a procurement activity that happens before the real work begins. It is the design of the governance architecture that will shape every significant decision made during the life of the engagement. What the institution accepts in the contract is what it will live with for three to five years: the accountability model, the design authority boundary, the scope definition, the performance expectations, and the consequences when expectations are not met.
Institutions that treat this review as a legal and commercial formality—to be completed quickly so that the engagement can start—typically discover its importance the first time a significant operational failure occurs and the question of accountability cannot be answered clearly by reference to the contract. Institutions that invest in systematic review—finding each gap, establishing a clear position on each commercial term, and requiring resolution before signature—are buying something more than a service. They are buying the governance conditions under which a service relationship can actually work.
In a market where managed services for complex technology platforms is becoming a standard operating model, the capability to review and negotiate these agreements rigorously is itself a strategic capability. The institution that can articulate precisely what it requires, identify where a proposal falls short, and negotiate to a position that protects its interests has not just secured a better contract. It has demonstrated the institutional maturity to govern a multi-vendor operating model at scale.