Why AI Governance Keeps Failing – And What We’re Missing

Across health, social protection, education, and public services, artificial intelligence is being deployed faster than governance systems can adapt. This gap is especially visible in low- and middle-income contexts, where institutional capacity is already stretched and the consequences of failure are borne disproportionately by vulnerable populations.

In response, a familiar set of remedies has emerged: ethical AI principles, bias audits, benchmarks, model cards, and pilot evaluations. These efforts are well intentioned, often rigorous, and increasingly standardized. Yet despite their proliferation, they have not produced the level of trust, accountability, or legitimacy that AI deployment in public-interest contexts demands.

Why?

The problem, we believe, is not a lack of ethics frameworks or technical safeguards. There are many of those now. It is that most AI governance efforts are operating at the wrong level of the problem.

The surface diagnosis: more tools, more rules

At the surface, AI governance is framed as a technical and compliance challenge. Models may be biased. Outputs may be unsafe. Systems may behave unpredictably. The solution, accordingly, is to measure, constrain, and audit these systems more effectively.

This framing has driven valuable advances. But it has also led to a fragmented governance landscape: evaluations that are conducted once and forgotten; equity checks that vary by project; community engagement that is expensive, extractive, or absent; and responsibility that is diffused across vendors, developers, and institutions.

From this vantage point, governance appears to be a matter of improving tooling or enforcing standards more consistently.

But this misses a deeper issue.

The systemic failure: decision-making without accountability

Look more closely at how AI is actually deployed in public and social-sector institutions, and a different failure mode becomes clear.

In practice, AI systems are increasingly used to inform or shape decisions that affect people’s access to services, information, and care. Yet the institutional processes surrounding those decisions have not evolved accordingly.

Evaluations are often treated as advisory rather than decision-critical. Equity is discussed rhetorically but rarely operationalized in a way that is repeatable or auditable. Community input is framed as “research” rather than as part of a governed decision process. And when harms occur, responsibility is difficult to trace.

The result is not irresponsible actors, but underpowered governance systems.

Execution systems for building, testing, and deploying AI have scaled rapidly. Governance systems for deciding whether and how deployment is justified have not.

The hidden assumptions shaping AI governance

Beneath this systemic gap lies a set of implicit assumptions that shape much of today’s AI governance discourse.

One assumption is that governance is primarily about controlling models rather than governing decisions. Another is that ethics can be “embedded” at design time, rather than operated continuously. A third is that communities are end users or data sources, rather than rights-bearing stakeholders whose participation must be ethically justified.

These assumptions lead to governance solutions that focus on metrics, benchmarks, and compliance artifacts. While useful, they rarely answer the question that institutions ultimately face:

Can we defend this deployment decision – ethically, socially, and politically – to those affected by it?

A deeper reframing: from model governance to decision governance

Our work over the past several years has led us to a different framing of the AI governance challenge.

The core problem is not how models behave in the abstract. It is how institutions make and justify decisions under uncertainty, using AI as one input among many.

This shift has important implications.

If AI is treated as an autonomous decision-maker, governance becomes an attempt to constrain or supervise it. Responsibility disperses into the system. When outcomes are contested, blame is difficult to assign.

If AI is treated instead as a decision-support instrument, governance must focus on human accountability: who decides, on what basis, using what evidence, and with what safeguards.

This is not a semantic distinction. It changes what governance systems are designed to do.

Governance as operational infrastructure

From this perspective, effective AI governance cannot rely solely on policies, principles, or post-hoc audits. It must be operational, embedded in workflows, invoked repeatedly, and capable of producing evidence.

Such governance systems would:

  • require explicit articulation of intended use, audience, and context
  • surface equity, safety, and readiness risks in standardized ways
  • require responsible humans to record decisions and justifications
  • generate auditable records of those decisions
  • gate access to community validation ethically, rather than treating it as a default

Importantly, these systems would not automate judgment. They would scaffold it.
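To make that concrete, here is a minimal sketch in Python of what a governed deployment-decision record might look like. The class name, field names, and completeness check are illustrative assumptions, not a prescribed schema; the point is that the record demands explicit answers from a responsible human rather than computing an answer itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: names and fields are assumptions, not a published standard.

@dataclass
class DeploymentDecisionRecord:
    """One governed decision about whether and how an AI system may be deployed."""
    intended_use: str            # what the system is for
    audience: str                # who it will affect
    context: str                 # where and under what conditions it will run
    identified_risks: list[str]  # equity, safety, and readiness risks surfaced in review
    responsible_owner: str       # the named human accountable for the decision
    justification: str           # why the decision is (or is not) defensible
    decision: str                # e.g. "approve", "defer", "reject"
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_complete(self) -> bool:
        """Scaffold judgment without replacing it: the record cannot be logged
        until every field has been explicitly filled in by a person."""
        required = [self.intended_use, self.audience, self.context,
                    self.responsible_owner, self.justification, self.decision]
        return all(required) and len(self.identified_risks) > 0
```

Nothing in this sketch is clever, and that is the point: the decision, its owner, and its justification become auditable artifacts rather than implicit understandings.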

This distinction matters because some of the most consequential failures in AI deployment arise not from malicious intent or technical incompetence, but from institutional ambiguity – from decisions being made implicitly, without clear ownership or defensibility.

Rethinking community validation

One area where this reframing is particularly important is community engagement.

In many public-interest AI projects, community validation is treated as either an optional add-on or an extractive exercise. Focus groups are convened late, feedback is summarized informally, and participation is rarely compensated.

A governance-first approach treats community validation differently. It recognizes that community time, insight, and trust are scarce resources. Access to them should be earned through prior screening, clear intent, and ethical safeguards.

In this view, community validation is not a substitute for expert judgment. It is a deeper tier of legitimacy that must be justified, governed, and compensated.
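One hypothetical way to operationalize that gating is a simple prerequisite check that refuses to unlock community validation until prior screening, documented intent, compensation, and ethical safeguards are in place. The prerequisite names below are assumptions chosen for illustration, not an established checklist.

```python
# Illustrative sketch: gate access to community validation behind explicit prerequisites.

def community_validation_permitted(review: dict) -> tuple[bool, list[str]]:
    prerequisites = {
        "internal_screening_complete": "internal evaluation and equity review finished",
        "intent_documented": "intended use, audience, and questions for the community written down",
        "compensation_budgeted": "participant time will be compensated",
        "safeguards_approved": "consent, privacy, and feedback-return plans approved",
    }
    missing = [desc for key, desc in prerequisites.items() if not review.get(key)]
    return (len(missing) == 0, missing)

permitted, missing = community_validation_permitted({
    "internal_screening_complete": True,
    "intent_documented": True,
    "compensation_budgeted": False,
    "safeguards_approved": True,
})
# permitted is False; missing reports that participant compensation is not yet budgeted.
```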

Why this matters now

The stakes of AI governance are rising. As AI systems move from pilots to infrastructure, failures will no longer be isolated incidents. They will be systemic.

In contexts where institutional trust is fragile, symbolic governance – the appearance of responsibility without its substance – can be more damaging than no governance at all. It erodes confidence while masking risk, and it undermines the credibility of every responsible deployment that follows.

What is needed instead is governance that is:

  • legible to institutions and regulators
  • defensible to funders and the public
  • respectful of communities’ agency
  • lightweight enough to operate continuously

This requires moving beyond governance as documentation toward governance as institutional capability.

A closing reflection

Much of today’s AI governance debate remains focused on surface-level interventions: better metrics, better audits, better principles. These are necessary, but insufficient.

The deeper challenge is to build systems that help institutions take responsibility for decisions made with AI, not once, but repeatedly, under uncertainty.

If AI is becoming part of the infrastructure of public and social systems, then governance must become infrastructure too: operational, accountable, and grounded in human judgment.

That is the innovation thesis we believe the next phase of AI governance must confront, and one we are intent on pursuing.
