AI Governance in Financial Services: What Insurers Should Know About Emerging US and UK Approaches
- Watertrace Limited
As insurers increasingly embed artificial intelligence into underwriting, claims handling, fraud detection, and operational workflows, governance expectations are evolving just as quickly.
Boards are now faced with a practical challenge: how to ensure AI systems remain transparent, controllable, and resilient while regulatory expectations are still developing.
Across the financial services ecosystem, regulators are shifting their focus from supporting AI innovation toward strengthening governance and supervisory oversight.

In the United States, detailed implementation tooling for AI governance in financial services is emerging largely through industry-led frameworks such as the recent Financial Services AI Risk Management Framework (FS AI RMF), complemented by supervisory oversight from multiple federal financial regulators.
In the United Kingdom, AI oversight is developing primarily through existing regulatory institutions such as the Financial Conduct Authority (FCA) and the Bank of England, which are integrating AI supervision into established financial regulatory regimes.
Two distinct governance philosophies are emerging:
a structured, control-led framework developing in the United States
a principles-based, regulator-led supervisory model in the United Kingdom
Both aim to enable responsible AI adoption while maintaining trust in financial systems. For insurance boards and operational leaders, understanding how these approaches differ can help shape governance strategies for the coming years.
The US Approach: Structured AI Risk Governance
The development of sector-specific governance tools in the United States is occurring within a broader national push to accelerate AI innovation. This policy direction was reinforced by the White House’s AI Action Plan, released in July 2025, which outlines national priorities for innovation, infrastructure, and global technology leadership.
In February 2026, the Financial Services AI Risk Management Framework (FS AI RMF) was introduced through a collaboration of financial institutions, industry groups, and public-sector stakeholders. The framework builds on the National Institute of Standards and Technology (NIST) AI Risk Management Framework, translating its principles into practical guidance tailored to financial services operations.
Rather than introducing entirely new regulatory concepts, it adapts established risk management practices into structured implementation tools for AI systems.
The framework includes several operational resources:
FS AI RMF Executive Summary
AI Adoption Stage Questionnaire
Risk and Control Matrix (RCM)
Implementation Guidebook
Control Objective Reference Guide
Quick-start adoption guidance
At its core, the framework introduces 230 control objectives designed to help financial institutions embed governance across the AI lifecycle.
Organisations are encouraged to assess their maturity across four stages: initial, developing, integrated, and embedded.
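To make the maturity-staging idea concrete, the sketch below shows one way an organisation might track control objectives and roll them up into a weakest-link view per governance area. This is an illustrative model only: the control IDs, areas, and descriptions are hypothetical placeholders, not actual FS AI RMF objectives.

```python
from dataclasses import dataclass
from enum import IntEnum

class Maturity(IntEnum):
    """The four adoption stages described by the framework."""
    INITIAL = 1
    DEVELOPING = 2
    INTEGRATED = 3
    EMBEDDED = 4

@dataclass
class ControlObjective:
    control_id: str   # hypothetical identifier, not a real FS AI RMF code
    area: str         # e.g. "model design", "incident response"
    description: str
    maturity: Maturity

def maturity_summary(controls: list[ControlObjective]) -> dict[str, Maturity]:
    """Return the lowest maturity per area -- a weakest-link view."""
    summary: dict[str, Maturity] = {}
    for c in controls:
        current = summary.get(c.area)
        if current is None or c.maturity < current:
            summary[c.area] = c.maturity
    return summary

controls = [
    ControlObjective("MD-01", "model design",
                     "Document intended use and limitations", Maturity.INTEGRATED),
    ControlObjective("MD-02", "model design",
                     "Approve model changes before release", Maturity.DEVELOPING),
    ControlObjective("IR-01", "incident response",
                     "Define AI incident escalation paths", Maturity.INITIAL),
]

print(maturity_summary(controls))
```

Reporting the minimum rather than the average per area reflects how assurance reviews typically work: a single weak control can undermine an otherwise mature domain.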
For technology and risk leaders, the value of this model lies in its operational clarity. It provides a structured roadmap for implementing AI governance across areas such as:
model design
training data integrity
operational monitoring
third-party dependencies
incident response
This structured approach aims to ensure that trust and accountability are built into AI systems by design.
The UK Approach: Principles, Accountability, and Regulatory Integration
The United Kingdom has largely adopted a principles-based regulatory model for AI.
Rather than introducing AI-specific legislation for financial services, the UK government and regulators have chosen to apply existing regulatory frameworks to AI technologies.
Financial regulators such as the Financial Conduct Authority (FCA) and the Bank of England oversee AI adoption within their existing supervisory structures.
This reflects the UK's broader regulatory philosophy of principles-based oversight, where firms are expected to demonstrate that emerging technologies operate in line with established regulatory standards.
The FCA’s AI Update emphasises responsible and proportionate adoption of AI across financial services. Firms are expected to ensure their systems align with core regulatory principles including:
safety and robustness
fairness and transparency
accountability
operational resilience
Within this framework, the Senior Managers and Certification Regime (SMCR) plays an important role. It reinforces that senior leaders remain accountable for decisions made by AI systems deployed within their organisations.
However, the UK approach has also attracted scrutiny.
A recent Treasury Committee report (HC 684) raised questions about whether firms currently have sufficient operational clarity when implementing AI within existing regulatory frameworks.
At the same time, regulators are actively building supervisory insight.
The FCA has launched further initiatives, including its Call for Input on the long-term impact of AI in retail financial services (the “Mills Review”), to better understand the systemic and competitive implications of AI adoption.
This suggests the UK model is continuing to evolve, with regulators gathering evidence and industry input before introducing additional supervisory expectations.
Where AI Risk Appears in Insurance Operations
For insurers, AI risk management rarely exists in isolation.
AI models often depend on complex operational ecosystems, including:
bordereaux ingestion pipelines
claims automation workflows
fraud detection models
third-party analytics providers
cloud-based AI infrastructure
Governance frameworks, therefore, need to address more than model explainability.
They must also consider the traceability of data inputs, automated decision paths, and operational dependencies that underpin AI-enabled processes.
In many organisations, the challenge is not simply defining governance policies, but ensuring that data, operational processes, and automated decisions remain observable and auditable across the wider system architecture.
As a result, governance discussions around AI are increasingly moving beyond technical model design toward broader questions of organisational oversight, accountability, and operational control.
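One way to make automated decisions observable and auditable, as discussed above, is to keep an append-only log of each decision with a fingerprint of its inputs and a link to the previous entry. The sketch below is a minimal illustration of that pattern; the field names and model identifiers are hypothetical, and a production system would add access controls, retention policies, and durable storage.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(model_id, model_version, inputs, decision, log):
    """Append a tamper-evident record of one automated decision.

    Hashing the inputs gives a traceable fingerprint without storing
    sensitive claim data in the log itself; chaining each entry to the
    previous one makes after-the-fact edits detectable.
    """
    payload = json.dumps(inputs, sort_keys=True).encode()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "decision": decision,
        "prev_hash": log[-1]["entry_hash"] if log else None,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

# Hypothetical fraud-screening decisions in a claims workflow.
log = []
record_decision("fraud-screen", "2.4.1",
                {"claim_id": "C-1001", "amount": 4200.0},
                "refer-to-handler", log)
record_decision("fraud-screen", "2.4.1",
                {"claim_id": "C-1002", "amount": 310.0},
                "auto-approve", log)
```

Recording the model version alongside each decision is what lets an auditor later reconstruct which model, on which inputs, produced a given outcome.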
Key Governance Considerations for Insurance Boards

For insurance leaders, the more important question is not which regulatory approach is “better,” but how each framework shapes governance expectations.
We identify several themes emerging across both jurisdictions.
Structured Controls vs Principles-Based Oversight
The US FS AI RMF provides detailed implementation tools and maturity staging, offering organisations a structured governance architecture.
The UK approach relies on principles-based oversight, meaning firms must demonstrate that AI systems operate in line with existing regulatory principles rather than following a single prescriptive framework.
Control Design vs Accountability Design
The US model emphasises predefined control objectives embedded within operational processes.
The UK model places greater weight on individual accountability through regulatory regimes such as SMCR.
Supervisory Structure vs Regulatory Integration
The US framework introduces a dedicated governance structure for AI within financial services.
The UK approach integrates AI oversight into existing regulatory frameworks while regulators continue to evaluate whether additional guidance may be required.
Third-Party and Infrastructure Risk
Both jurisdictions recognise the growing concentration risk within AI and cloud ecosystems.
The UK is advancing its Critical Third Parties regime, while US frameworks emphasise shared-responsibility models across AI supply chains.
What This Means for Insurance Leaders
Across both regulatory approaches, one expectation is becoming increasingly clear: organisations must be able to demonstrate traceability, governance, and operational resilience in AI deployment.
For boards, this means ensuring that AI adoption is supported by:
clear accountability structures
observable decision processes
reliable data pipelines
effective monitoring and controls
In practice, AI governance is increasingly linked to the operational architecture that supports automated decisions.
Preparing for the Next Phase of AI Governance
Whether firms adopt the structured control frameworks emerging in the US or operate within the UK’s principles-based model, the direction of travel is similar.
AI adoption in financial services will increasingly require organisations to demonstrate that their systems are:
explainable
controllable
auditable
operationally resilient
For insurers, preparing for this shift involves understanding not only the regulatory frameworks themselves but also the data flows and operational processes that support AI-driven decisions.
Organisations that build this visibility early will be better positioned to scale AI adoption with confidence.
Supporting Responsible AI Adoption
As financial institutions assess how their operational processes and data pipelines support AI governance, it becomes increasingly important to ensure these systems remain transparent, traceable, and controllable.
Watertrace works with leading financial institutions, including insurers, MGAs, banks, and asset managers, to help make operational processes observable, structured, and scalable. This enables organisations to support automation and AI adoption while strengthening governance, resilience, and regulatory readiness.
If your organisation is assessing how operational processes, data flows, or automation pipelines support AI governance, get in touch with Watertrace to explore how structured operational visibility can support safe and scalable AI adoption.
Contact: info@watertrace.com


