Intentional intelligence: how insurance and finance leaders can adopt AI without breaking compliance

January 19, 2026
by Wes Worsfold

The push for AI adoption is driving conversations from boardrooms to watercoolers to Slack channels. However, for leaders in regulated industries, the hesitation to adopt isn't just about technology—it is about risk.

According to a recent report from Zapier, only one in four enterprise companies has adopted AI on an organizational level. While 41% of senior executives stated that delaying AI adoption is causing them to fall behind their competitors, the stakes for insurance and finance are far higher than for their unregulated peers.

So, what’s behind the gap between the business need for AI adoption and the reality of implementation? 

For many leaders in these sectors, it is anxiety about what happens inside the AI "black box". In insurance, for example, you can't deny a claim simply because "the computer said so". Insurance and finance companies need to understand the “why” behind every decision and have the data to back it up.

This hesitation isn't just resistance to change. It is a necessary defence against tangible risks. We are moving into an era where the cost of "move fast and break things" is simply too high for regulated sectors. The data supports this caution. By 2027, over 40% of AI-related data breaches are expected to be caused by improper AI use.

To close the gap between ambition and adoption, we have to change our metric of success. The future of industry-grade AI isn't about raw power; it's about explainability and governance.

What are the problems with AI in insurance and finance?

When it comes to regulated industries like insurance and finance, purchasing a few Copilot or ChatGPT subscriptions is not going to cut it. Relying on generic tools creates three critical gaps:

  • Data Sovereignty Risks: Public subscriptions often retain user inputs. You cannot risk sensitive policyholder data becoming part of a public model's training set.
  • The "Black Box" Problem: Standard tools provide an answer, but not the audit trail. You cannot satisfy regulators without proving how a decision was reached.
  • Zero Institutional Context: A generic model doesn’t know your specific risk appetite or underwriting history, making its "advice" dangerous to use without heavy oversight.

These vulnerabilities underscore a simple truth: generic models are optimized for generative creativity, prioritizing fluency and speed. Regulated industries, however, require tools optimized for enterprise reliability, prioritizing accuracy, security, and control.

The difference between generative creativity and enterprise reliability

Using ChatGPT to draft a note for a coworker’s birthday card is great. Using it to assess risk on a multimillion-dollar commercial property is not. We might laugh when an AI hallucinates, but in insurance and finance, a hallucination is a liability.

For leaders in these spaces, the nightmare scenario isn't just an incorrect answer. It is a compliance crisis involving data leakage or discriminatory outcomes hidden inside a black box. If an unmonitored model denies a loan based on biased logic, or if proprietary client data is inadvertently used to train a public model, the damage goes far beyond a software bug. It strikes at the core of the organization's fiduciary responsibility.

This is one of the reasons we’re seeing major financial institutions hesitating to deploy these tools at scale. It comes down to accountability. According to a recent report by Dentons, the banking industry remains nervous about these operational risks. The biggest concern, expressed by 57% of sector respondents, was that a lack of human influence on certain tasks would lead to errors, raising serious questions about liability for AI-generated mistakes.

When the cost of an error is a lawsuit or a regulatory fine, "mostly accurate" simply isn't good enough.

How can regulated industries adopt AI without compliance risks?

How can regulated firms close the gap between caution and innovation? The answer isn't to avoid AI, but to design it with specific guardrails. It’s a concept known as intentional AI, one that we put to work in every AI-related project.

Intentional AI shifts the focus from "what can the model generate?" to "how can we verify the output?"

For insurance and finance leaders, this requires moving away from the "black box" model, where data goes in and a decision mysteriously comes out. Instead, intentional AI takes a "glass box" approach, where the interface is designed to show the user how the AI reached a conclusion.

For example, instead of an algorithm simply flagging a claim as "Fraudulent," an intentional system would present a dashboard to the claims adjuster saying: "Flagged for review: 87% Confidence Score based on [Factor A] and [Factor B]."
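
To make that concrete, here is a minimal sketch in Python of the kind of record a "glass box" system might return instead of a bare verdict. The claim ID, factor names, and fields are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ClaimFlag:
    """An explainable flag: the score plus the evidence behind it."""
    claim_id: str
    confidence: float                  # model probability, 0.0 to 1.0
    factors: list[tuple[str, float]]   # (human-readable factor, weight)
    model_version: str                 # pinned version for the audit trail
    flagged_at: str                    # UTC timestamp, ISO 8601

def describe(flag: ClaimFlag) -> str:
    """Render the flag the way an adjuster would see it on the dashboard."""
    reasons = " and ".join(name for name, _ in flag.factors)
    return f"Flagged for review: {flag.confidence:.0%} confidence based on {reasons}"

# Illustrative values throughout.
flag = ClaimFlag(
    claim_id="CLM-10482",
    confidence=0.87,
    factors=[("repair estimate exceeding vehicle value", 0.41),
             ("claim filed within 30 days of policy start", 0.33)],
    model_version="fraud-triage-2026.01",
    flagged_at=datetime.now(timezone.utc).isoformat(),
)
print(describe(flag))
```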

This leads to the most critical component of intentional AI adoption: Human-in-the-Loop (HITL) workflows.

To solve the liability concerns raised in the Dentons report, AI should not be deployed as an autonomous decision-maker, but as a high-speed analyst. It triages the data, highlights anomalies, and cites its sources, but it leaves the final judgment call to the human expert. 

By keeping a human in the loop, you maintain the benefits of automation while retaining the audit trail and accountability required by regulators.
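
Here is a minimal sketch of that division of labour, with hypothetical names and thresholds: the model only routes work for review, and every final decision is made by a named adjuster and written to an audit log alongside what the model suggested and why.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

REVIEW_THRESHOLD = 0.60  # illustrative; calibrate with your compliance team

def triage(confidence: float) -> bool:
    """The model routes work; it never closes a claim on its own."""
    return confidence >= REVIEW_THRESHOLD

def adjudicate(claim_id: str, confidence: float, factors: list[str],
               adjuster_id: str, decision: str) -> str:
    """The adjuster owns the final call; the system records the full trail."""
    audit_log.info(json.dumps({
        "claim_id": claim_id,
        "model_confidence": confidence,   # what the AI suggested, and why
        "model_factors": factors,
        "human_decision": decision,       # the human's call, not the model's
        "adjuster_id": adjuster_id,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }))
    return decision

# The model surfaces an anomaly; a named human makes and owns the decision.
if triage(confidence=0.87):
    adjudicate("CLM-10482", 0.87,
               ["repair estimate exceeding vehicle value"],
               adjuster_id="ADJ-220", decision="escalate to SIU")
```

The threshold and the choice to escalate rather than auto-deny are policy decisions; the point is that the log captures both the model's suggestion and the human's judgment, which is exactly the trail a regulator will ask for.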

Implementing Intentionality: The BitBakery Approach

Building "glass box" systems within a complex enterprise architecture requires a shift in how we build software. At BitBakery, we architect workflows that prioritize data sovereignty and user clarity from day one.

We believe that true intentionality happens at the intersection of design and DevOps.

  • Design-First Governance: We start by designing the user experience for transparency. By focusing on the interface first, we ensure that confidence scores, source citations, and human-override buttons are central to the workflow. If a claims adjuster cannot easily understand the AI's logic, the design needs to change regardless of the model's accuracy.
  • Secure Architecture: We deploy AI environments that are completely isolated from public models. Whether on AWS or Azure, we use private instances to keep your proprietary data within your virtual private cloud (VPC). Your data trains your insights and stays there (see the sketch after this list).
  • The "Embedded" Advantage: Generic vendors often lack the context of your specific compliance landscape. Our embedded teams integrate directly with your internal stakeholders—compliance officers, underwriters, and product owners—to translate complex regulatory rules into code.

Innovation Without Recklessness

The winners of the AI race in 2026 will be the companies with the most trusted systems.

It is possible to modernize legacy systems without exposing your firm to the "black box" liabilities that keep leaders up at night. By prioritizing explainability, governance, and a Human-in-the-Loop approach, you satisfy both your innovators and your regulators.

Innovation requires speed, but longevity requires intent.

Are you looking to modernize your legacy workflows without exposing your firm to unnecessary risk? Contact us today to discuss building an Intentional AI roadmap that satisfies both your innovators and your regulators.
