Insurance is entering a decade defined by agentic AI, automation, and outcome‑driven digital decision‑making. At the same time, regulators in the United States, Canada, and Europe are establishing formal, enforceable expectations that govern how AI systems must be built and used.
This means the next insurance operating model must be built around three mutually reinforcing pillars: agentic, data‑driven, and compliant. Compliance, anchored in NAIC guidelines, OSFI Guideline E‑23, and the EU AI Act, is no longer an optional safety layer—it is the legal infrastructure that determines how modern AI can operate within the insurance sector.
Insurers that embed regulatory alignment into their operating model will scale more confidently, reduce risk, and build trust with policyholders, regulators, and partners.
Why Agentic AI Changes the Operating Model but Compliance Defines Its Boundaries
Agentic AI introduces new capabilities such as autonomous planning, tool usage, multi‑step reasoning, and cross‑workflow orchestration. These systems change how underwriting, claims, and fraud processes operate. They reduce manual handoffs, increase speed, and generate consistent outcomes.
However, the legal frameworks governing AI require insurers to define:
• Where human approval is required
• How transparency and fairness are maintained
• How decisions are documented and explained
• How models are risk‑tiered, validated, and monitored?
• How external vendors and tools are governed?
• How consumer impacts are identified and mitigated?
Agentic AI may transform execution, but NAIC, OSFI, and the EU AI Act define the rules that determine what is acceptable, safe, and legally compliant.
The Three Pillars of the Next Operating Model
1) Agentic: Coordinated Digital Workflows with Human Oversight
Agentic systems enable insurers to automate multi‑step workflows using specialized agents: intake agents, document analyzers, risk evaluators, customer communication agents, and fraud detection systems. An orchestrator coordinates these agents, ensuring the right work goes to the right system or human at the right time.
Human‑in‑the‑loop oversight remains central. Material, judgment‑based decisions—such as rating changes, declinations, claim settlements, or fraud referrals—must always pass through defined approval gates. Every action taken by an agent must be logged for auditability.
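The gating pattern described above can be sketched in a few lines of Python. This is an illustrative assumption of how an orchestrator might work, not a reference implementation: the `HUMAN_GATED` action list, the class names, and the routing logic are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Decision types that must always pass through a human approval gate.
# Illustrative list; an insurer's own risk-tiering would define this set.
HUMAN_GATED = {"rating_change", "declination", "claim_settlement", "fraud_referral"}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, agent: str, action: str, outcome: str) -> None:
        # Every agent action is logged with a UTC timestamp for auditability.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "outcome": outcome,
        })

class Orchestrator:
    def __init__(self, audit: AuditLog):
        self.audit = audit

    def route(self, agent: str, action: str, payload: dict) -> str:
        if action in HUMAN_GATED:
            # Material, judgment-based decisions stop at an approval gate.
            self.audit.record(agent, action, "queued_for_human_approval")
            return "queued_for_human_approval"
        # Routine work proceeds automatically but is still logged.
        self.audit.record(agent, action, "auto_processed")
        return "auto_processed"

audit = AuditLog()
orchestrator = Orchestrator(audit)
print(orchestrator.route("intake_agent", "document_extraction", {"doc": "ACORD 125"}))
print(orchestrator.route("risk_evaluator", "declination", {"policy": "P-1001"}))
```

The key design choice is that the gate is enforced by the orchestrator, not left to individual agents, so no specialist agent can bypass human review for a regulated decision type.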
2) Data‑Driven: Governed, Traceable, High‑Quality Data Foundations
Regulatory frameworks place heavy emphasis on the quality, governance, and transparency of data. For insurers, this means:
• Reliable, curated data products for underwriting, claims, and billing
• Documented lineage for each data source and transformation
• Quality checks for representativeness and bias
• Access controls that minimize exposure of sensitive or personal data
• Monitoring that connects data behavior to model outputs
In the regulated AI landscape, data is not simply fuel—it is evidence.
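One minimal way to treat data as evidence is to record, for every transformation step, its source, the operation applied, and a content hash of the output. The sketch below assumes this approach; the function name and record fields are hypothetical.

```python
import hashlib
import json

def lineage_record(source: str, transform: str, payload: dict) -> dict:
    # Documented lineage: each step records where the data came from, which
    # transformation was applied, and a deterministic content hash so the
    # output can later be verified during an audit.
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {"source": source, "transform": transform, "sha256": digest}

step = lineage_record(
    source="claims_raw.fnol_2024",
    transform="normalize_loss_codes",
    payload={"claim_id": "C-77", "loss_code": "WTR-01"},
)
print(step["source"], "->", step["transform"])
```

Because the hash is computed over a canonical (key-sorted) serialization, the same data always produces the same fingerprint, which is what makes the record usable as audit evidence rather than just a log line.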
3) Compliant: NAIC, OSFI, and the EU AI Act as the Operating Backbone
Compliance is the structural pillar that shapes everything else. Insurance regulators across the world are formalizing expectations for AI systems, and insurers must align their operating models accordingly.
NAIC (United States)
NAIC guidance requires insurers to maintain an enterprise‑wide approach to AI governance. This includes documented governance programs, oversight of all models and automated systems, fairness monitoring, clear explanation of decisions, and proactive management of third‑party vendor risks.
OSFI Guideline E‑23 (Canada)
This guideline establishes a legally enforceable framework for model risk management for all federally regulated financial institutions. Insurers must maintain complete model inventories, classify models by risk, validate models independently, track data quality, document lifecycle changes, and oversee vendor involvement.
EU AI Act (Europe)
The EU AI Act classifies most insurance pricing, underwriting, and risk systems as High‑Risk AI. High‑Risk systems face strict requirements: documentation, human oversight, testing, monitoring, data governance, incident reporting, and structured record‑keeping. Even insurers based outside the EU must comply when their AI systems serve customers or business in the EU.
Compliance is not a constraint. It is the operating system for safe, explainable, and trustworthy automation.
Target‑State Operating Model
A. Strategy and Governance
• Business goals aligned with regulatory and AI objectives
• A governance council bringing together legal, risk, actuarial, business, and technology leaders
• A risk‑tiering method that determines required controls for each model or agentic workflow
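A risk-tiering method like the one listed above can be made concrete as a lookup from tier to mandatory controls. The two tiering criteria below (consumer impact and autonomy) and the control names are assumptions for illustration, not a regulatory formula.

```python
# Illustrative mapping: the tier assigned to a model or agentic workflow
# determines the minimum controls it must carry.
CONTROLS_BY_TIER = {
    "high":   {"independent_validation", "human_approval_gate",
               "bias_monitoring", "full_documentation"},
    "medium": {"periodic_validation", "bias_monitoring", "full_documentation"},
    "low":    {"periodic_validation", "basic_documentation"},
}

def required_controls(consumer_impact: bool, autonomous: bool) -> set:
    # Simple two-factor tiering (assumed criteria): systems that are both
    # consumer-facing and autonomous land in the highest tier.
    if consumer_impact and autonomous:
        return CONTROLS_BY_TIER["high"]
    if consumer_impact or autonomous:
        return CONTROLS_BY_TIER["medium"]
    return CONTROLS_BY_TIER["low"]

print(sorted(required_controls(consumer_impact=True, autonomous=True)))
```

Encoding the tiering as data rather than scattered policy text means the governance council can review and version one table, and the orchestration layer can enforce it mechanically.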
B. Agentic Orchestration Layer
• Orchestrator agent that enforces workflow rules and logs all actions
• Specialist agents for claims, underwriting, fraud, and customer service
• Human‑in‑the‑loop checkpoints for regulated or high‑impact decisions
C. Data and Knowledge Layer
• Curated and governed data products
• Standardized data contracts and access policies
• Controlled retrieval corpora
• Documented lineage and traceability for audit review
D. Controls and Assurance Layer
• Model and agent inventory with clear risk classifications
• Ongoing validation and monitoring that support fairness, accuracy, and consistency
• Compliance with NAIC, OSFI, and EU AI Act documentation expectations
• Vendor oversight, explainability standards, and emergency control mechanisms
E. Platform and Integration Layer
• Observability for agent actions, prompts, and tool usage
• Guardrails and safety filters
• API‑based integration with policy, billing, claims, and data platforms
• Immutable audit logs for internal and external review
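One common technique for immutable audit logs is hash chaining, where each entry embeds the hash of its predecessor so later tampering is detectable. The sketch below assumes that technique; the entry structure and function names are illustrative.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    # Each entry embeds the hash of the previous entry, so rewriting any
    # earlier record breaks the chain for every record after it.
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify(log: list) -> bool:
    # Recompute every hash from the start; any edit surfaces as a mismatch.
    prev = "genesis"
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "claims_agent", "action": "reserve_set"})
append_entry(log, {"agent": "fraud_agent", "action": "case_opened"})
print(verify(log))  # True for an untampered chain
```

In production this chaining would typically be backed by append-only storage or a write-once service, but the verification idea is the same: reviewers can prove the log was not edited after the fact.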
Use‑Case Patterns with Compliance Built In
1) Underwriting Triage Agent
• Automated intake and document understanding
• Human approval required before binding or declining
• Traceable reasoning and data lineage
2) Claims FNOL‑to‑Settlement Agent
• Automated evidence capture and liability assessment
• Human checkpoints for settlement and liability outcomes
• Monitoring for geographic or claimant‑based bias
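A minimal version of the bias monitoring in the list above is to compare approval rates across groups and flag the gap. This is a deliberately simplified sketch; real fairness monitoring would use proper statistical tests and governed access to any protected attributes.

```python
from collections import defaultdict

def approval_rates(decisions: list) -> dict:
    # Group claim outcomes by region and compute the approval rate per group.
    totals = defaultdict(lambda: [0, 0])  # region -> [approved, total]
    for d in decisions:
        totals[d["region"]][1] += 1
        if d["approved"]:
            totals[d["region"]][0] += 1
    return {region: approved / total for region, (approved, total) in totals.items()}

def parity_gap(rates: dict) -> float:
    # Largest difference in approval rate between any two groups;
    # a monitoring job would alert when this exceeds a set threshold.
    return max(rates.values()) - min(rates.values())

decisions = [
    {"region": "north", "approved": True},
    {"region": "north", "approved": True},
    {"region": "south", "approved": True},
    {"region": "south", "approved": False},
]
rates = approval_rates(decisions)
print(round(parity_gap(rates), 2))  # 0.5 for this toy sample
```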
3) Fraud and SIU Investigation Agent
• Pattern detection and evidence assembly
• Human investigator review for every suspicious case
• Documented model impact and versioning
Governance as Competitive Advantage
When governance and compliance are embedded into AI workflows from the beginning, insurers benefit from:
• Faster deployment cycles
• Reduced regulatory risk
• Fewer surprises during internal audits or reviews
• Increased trust among customers and regulators
• More predictable implementation outcomes
Compliance is not a blocker—done well, it becomes an accelerator.
What Success Looks Like in 12 Months
• Several agentic workflows running with defined human oversight
• A unified governance framework satisfying NAIC, OSFI, and EU AI Act requirements
• Audit‑ready documentation and data lineage
• Demonstrated improvements in accuracy, cycle time, and operational consistency
• A scalable, compliant operating model that can expand across business lines
Final Take
The next insurance operating model is built on three pillars: agentic, data‑driven, and compliant. Compliance under NAIC, OSFI, and the EU AI Act is the legal and operational backbone that ensures AI is safe, explainable, and fair.
Insurers that embed these frameworks into their AI strategy will meet regulatory expectations while unlocking the full potential of agentic automation.
