
Who's Accountable for Your AI Agents?
By Lumia Labs · 13 Feb 2026


In 2022, a customer asked Air Canada's website chatbot about bereavement fares. The chatbot confidently told him to book a full-price ticket and apply for a partial refund within 90 days. That was wrong: Air Canada's actual policy requires requesting the discount before booking. The customer spent over $1,500 CAD on flights he would not have booked at full price. When he applied for the refund, Air Canada denied the claim, then argued the chatbot was "a separate legal entity" whose statements the company was not responsible for. A British Columbia tribunal disagreed and ordered Air Canada to pay damages. But "the AI said it, not us" is the defense organizations reach for first. Accountability in agentic AI is a hard problem.

The shift McKinsey is describing

McKinsey's 2025 research on the agentic organization frames AI as the largest organizational paradigm shift since the Industrial Revolution. Their model envisions "flat networks of hybrid agentic teams" with "real-time, embedded governance and agentic controls with human accountability." That last phrase does a lot of heavy lifting. It assumes organizations will figure out how to keep humans accountable for systems that act autonomously. Most haven't.

A tool does what you tell it. An agent decides what to do. When a developer uses an AI coding assistant, the developer reviews the output and takes responsibility. When an AI agent autonomously processes claims, triages support tickets, or adjusts pricing, accountability blurs.

Accountability challenges

Nobody designs an unaccountable AI system on purpose. It happens through gaps that individually seem manageable.

Diffused ownership

Multiple teams contribute to an agent's behavior: the ML team trains the model, the platform team deploys it, the product team defines the rules, the data team manages the inputs. When something goes wrong, each team owns a piece but nobody owns the outcome. Braham and van Hees call this the problem of many hands.
The more people involved in a decision, the less any individual feels responsible for the result.

Opacity of reasoning

When an AI agent makes a decision, even the people who built it often can't explain why. The European Union recognized this in the EU AI Act, which requires high-risk AI systems to allow human oversight and to provide explanations for their decisions. The regulation is ahead of actual capabilities across industries: you can't comply your way out of a black box.

Speed exceeds oversight

AI agents operate at machine speed. A human approval step that adds thirty seconds sounds fine, until agents scale up and the humans in the loop can't keep pace. Organizations face a trade-off: slow the agent down enough for human review, or let it run fast. Fast often wins, because it is cheaper.

Organizational inertia

Even when teams recognize these problems, existing structures resist change. Governance committees move far more slowly than developers can ship new AI agents. The org chart wasn't built for systems that cross every departmental boundary simultaneously.

Autonomy without accountability is liability

Consider the Boeing 737 MAX. The MCAS system made autonomous decisions about flight control, and Boeing didn't adequately inform pilots about its behavior. When the system encountered situations its designers hadn't anticipated, 346 people died. Subsequent investigations revealed diffused accountability across the board: engineers, managers, and regulators all shared responsibility, which meant nobody felt fully responsible.

AI agents are already making decisions about credit, healthcare triage, hiring, and content moderation. In the Netherlands, a tax authority algorithm wrongly accused over 26,000 families of fraud. Thousands faced financial ruin, and the entire cabinet resigned. The consequences don't need to look like a plane crash to be devastating.
Stanford's Human-Centered AI Institute maintains a collection of policy resources documenting how organizations deploying AI systems consistently underestimate the governance needed. The technology moves fast, governance moves slow, and harm happens in between.

What we think organizations should do

Governance before autonomy costs money: more people, slower release cycles, developer time. Not spending that money, however, is the bigger risk.

Assign outcome owners, not component owners

Every AI agent needs a single person accountable for what it does in production: one owner for outcomes, not one per component. That person needs the authority to shut the agent down when something goes wrong.

Build observability before autonomy

You wouldn't deploy a critical service without monitoring and alerting; that discipline is what DevOps is all about. AI agents need the same treatment: logging of decisions, monitoring, and automated alerts when behavior deviates from expectations. Without observability, governance is guesswork.

Define your oversight model explicitly

Decide upfront whether humans review decisions before they happen (human-in-the-loop), after they happen (human-on-the-loop), or only when anomalies are detected (human-over-the-loop). Each model has a different risk profile. If you don't choose deliberately, you default to human-over-the-loop, and by then review can come too late.

Design for explainability

When building AI agents, include decision logging and reasoning traces as core requirements, not afterthoughts.

Run pre-mortems

Before deploying an agent, ask: "If this agent causes harm, who is accountable, and how will we know?" If nobody can answer clearly, the agent isn't ready for production.

The governance gap is a leadership problem

Organizational structures to govern AI agents lag behind the technology. The "real-time, embedded governance" that McKinsey's research envisions is the right destination, but getting there requires deliberate work on accountability structures, oversight models, and organizational culture.
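To make the observability recommendation above concrete, here is a minimal sketch of decision-level logging with an automated flag for out-of-expectation behavior. The schema, field names, and thresholds are illustrative assumptions, not a real agent framework's API:

```python
import json
import logging
from dataclasses import dataclass, asdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.decisions")

@dataclass
class AgentDecision:
    """One structured record per autonomous decision (hypothetical schema)."""
    agent_id: str
    action: str
    inputs: dict      # the facts the agent acted on
    reasoning: str    # trace of why the agent chose this action
    confidence: float

def record_decision(decision: AgentDecision, amount_limit: float = 500.0) -> bool:
    """Log every decision; flag it for human review when it deviates.

    Returns True if the decision may proceed, False if it was held
    for review. The confidence and amount thresholds are illustrative.
    """
    log.info(json.dumps(asdict(decision)))  # durable, queryable audit trail
    amount = decision.inputs.get("amount", 0.0)
    if decision.confidence < 0.7 or amount > amount_limit:
        log.warning("ALERT: decision held for human review: %s", decision.action)
        return False
    return True
```

The point of the sketch is the shape, not the thresholds: every decision produces a structured, queryable record before it takes effect, so "why did the agent do that?" has an answer that outlives the conversation.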
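The three oversight models described above can also be made explicit in code rather than left implicit in process. A hedged sketch, with names of my own choosing, of a gate that decides whether an action needs human approval before it executes:

```python
from enum import Enum

class OversightModel(Enum):
    """The three review models discussed in the text (illustrative names)."""
    HUMAN_IN_THE_LOOP = "review before the action executes"
    HUMAN_ON_THE_LOOP = "review after the action executes"
    HUMAN_OVER_THE_LOOP = "review only when an anomaly is detected"

def requires_pre_approval(model: OversightModel, anomaly: bool) -> bool:
    """Return True when a human must approve before the agent acts."""
    if model is OversightModel.HUMAN_IN_THE_LOOP:
        return True           # every action waits for a person
    if model is OversightModel.HUMAN_OVER_THE_LOOP:
        return anomaly        # only anomalous actions are held
    return False              # human-on-the-loop: act now, audit afterwards
```

Writing the choice down as configuration forces the upfront decision the text recommends; an agent whose oversight model is nowhere in its code or config has defaulted, silently, to the weakest one.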
The organizations that figure this out first will build the trust, internal and external, that lets them deploy AI agents more ambitiously. That trust comes at a cost: dedicated governance roles and engineering effort spent on observability instead of features. But sustained innovation runs on accountability. Without it, ambition becomes liability.

Lumia Labs partners with organizations building governance and engineering practices for AI agents. If you're working through how to deploy AI autonomy responsibly, let's talk.