The risk committee agenda tells its own story. Tucked between cyber-attacks and climate exposure, a new line has appeared in hundreds of blue-chip annual reports: artificial intelligence (AI). Not as strategy. As risk. In the US, analysis of filings by large companies shows the share flagging AI as a material threat has jumped sharply in the past two years, with hundreds of major firms now explicitly acknowledging reputational, legal or operational dangers.
Regulators have noticed. Supervisors at the US Securities and Exchange Commission have already warned that many of these paragraphs are boilerplate, long on hypotheticals and short on concrete mitigation. When AI is significant enough to feature in the risk factors, it is significant enough to demand more than recycled wording. It requires a clear view of where AI sits in the business, who owns the consequences and under what conditions the organisation is prepared to switch a system off.
Too much enforcement, or too little?
Across the pond, Europe is writing its own chapter. The EU AI Act is moving from negotiation to implementation: bans on a handful of practices, obligations for general-purpose systems and a detailed regime for ‘high-risk’ uses are all due to come into force over the next few years. At the same time, Brussels is signalling staggered enforcement and grace periods, including proposals to delay some fines for transparency breaches and to give existing models time to adapt. Naming that tension is not a criticism; it is simply the reality of drafting the first major rulebook for a technology that refuses to sit still.
Governments are not waiting for that rulebook to settle. The OECD’s work on ‘governing with AI’ now tracks around 200 public-sector use cases across its members, from fraud detection and case triage to smarter inspections and tax compliance. Yet only a minority of administrations have a formal investment framework for AI, and incident reporting is patchy at best. AI is already in the plumbing of welfare, justice and licensing systems, often without the governance muscle you would expect for anything touching people’s lives at scale.
An uncomfortable pattern
Put those three corners together (board disclosures, developments in Brussels and the civil service) and the pattern is uncomfortable. Future-ready leaders align risk, regulation and usage. In most systems today, those three are out of sync. On paper, AI is a top-tier risk. In the regulatory arena, it is an evolving compliance project. In day-to-day operations, it is a fast-spreading set of experiments. The gaps between those views are where the real exposure sits.
Corporate risk culture is the first gap. When boards sign off on AI as a material risk but delegate its reality to a scattered collection of pilots, innovation teams and vendor contracts, they create a mismatch between narrative and control. A paragraph in the annual report cannot substitute for a map of where AI is actually used in the business, which decisions it influences and which systems cross the line into life-shaping territory: credit decisions, pricing, hiring, safety-critical operations. If you cannot point to those systems on a diagram, you are not managing the risk; you are describing it.
The second gap is regulatory. The AI Act is not a distant Brussels curiosity; for any company with European exposure, its bans, categories and obligations are now a design constraint, whether they like the law or not. Yet talk of grace periods and phased fines creates a temptation to wait. “We will move when the final guidance lands” sounds prudent, but in a fast-adopting organisation it is effectively a decision to let AI spread without a coherent governance frame. Future-ready leaders read the direction of travel and act now, rather than treating 2026 as the year in which someone else will tell them what responsible AI looks like.
Frontiers of concern
The public sector shows what happens when usage runs ahead of governance. Around the world, administrations are piloting AI in debt collection, eligibility checks and procurement. Albania has gone further, appointing Diella, an AI system, as a virtual minister for public tenders, celebrated by some as a leap in transparency and dismissed by others as political theatre. The symbolism is striking, but the harder question is what sits underneath the branding. Who audits the model? Who explains its decisions to a losing bidder, or to a citizen denied a service? Who is accountable when an opaque system makes a quiet, wrong call?
This is the frontier that should worry boards most: quiet, largely untested systems making life-changing judgements without clear thresholds, human oversight or lines of accountability. The same pattern can emerge inside any large organisation if AI is allowed to creep into core decision-making while remaining framed as a distant compliance topic or a glossy innovation story.
Alignment, in this context, is not a slogan; it is a discipline. On risk, it means moving from disclosure to design, building AI into the core risk architecture of the firm rather than leaving it as a paragraph in the report. On regulation, it means treating emerging rules as a floor, not a finish line, and using their categories as a lens on your own portfolio, even outside the jurisdictions where they formally apply. On usage, it means making experimentation boringly governed, so that pilots sit inside clear parameters instead of quietly redrawing the organisation’s social contract with its customers and employees.
AI will not save or sink a business on its own. Misalignment might. A technology that appears in risk factors on paper, in legal briefings in Brussels and in experimental systems deep in the organisation is not yet being led; it is being tolerated. The leaders who stay relevant will be the ones who close that triangle, so that when AI does shape a life-changing decision, they can explain not only what happened, but why they were right to let the system act at all.
Photo: Dreamstime.