
The blueprint brigade

Do governments really need AI action plans?

June 11, 2025

6 min read


Photo: Dreamstime.

On a typically grey morning in Westminster this January, the UK government published its AI Opportunities Action Plan, a 50-recommendation blueprint for maintaining its position as ‘the third largest AI market in the world’.

Within weeks, America’s new administration launched its own Request for Information for an AI Action Plan, seeking to ‘sustain and enhance America’s AI dominance’. Meanwhile, China’s New Generation Artificial Intelligence Development Plan, launched in 2017, aims to make the country ‘the global AI leader’ by 2030.

The world’s major powers are engaged in an unprecedented race. Not to build the fastest weapons or the tallest buildings, but to craft the most compelling artificial intelligence strategies. From Brussels to Beijing, governments are frantically drafting action plans, white papers, and strategic frameworks. The question is whether these documents represent serious policymaking or mere political theatre.

The answer, it turns out, is both.

Strategic imperative

The case for government AI action plans rests on a simple premise: artificial intelligence is too important to leave to chance. As Britain’s plan notes, “frontier models in 2024 are trained with 10,000x more computing power than in 2019”. The technology is advancing at breakneck speed, transforming everything from healthcare diagnostics to military strategy.

Countries that fail to plan risk being left behind. The United States currently spends 11.2 billion US dollars annually on AI and IT research and development, whilst China has invested 184 billion US dollars through government venture capital funds in AI firms since 2000. These are not casual investments. They represent a recognition that AI capabilities will determine national competitiveness for decades to come.

Strategic planning allows governments to tackle coordination problems that markets struggle to solve alone. Britain’s plan, for instance, calls for expanding sovereign compute capacity at least 20-fold by 2030—the sort of long-term infrastructure investment that requires state involvement. Private companies excel at developing applications, but building the underlying computational backbone often requires patient government capital.

Consider data infrastructure. The UK plan proposes creating a copyright-cleared British media asset training data set by partnering with institutions like the BBC and British Library. No private firm could negotiate such arrangements across cultural institutions. Only governments possess the convening power to orchestrate such collaborations.

Action plans also help governments avoid regulatory chaos. As Google noted in its submission to America’s AI consultation, there’s a pressing need to “avoid duplicative or siloed AI compliance rules across agencies”. Without central coordination, different departments might impose contradictory requirements, strangling innovation in red tape.

The perils of planning

The most obvious risk of such AI action plans is the tendency toward grandiose promises. China’s provincial governments have collectively set targets totalling RMB 704 billion (86.12 billion euros) for core AI sectors by the end of this year, nearly double the central government’s national target of RMB 400 billion. Such mathematical impossibilities reveal how political incentives can distort rational planning.

The pace of technological change also makes long-term planning particularly treacherous. Most AI action plans span five to ten years, but the technology evolves in months. Experts estimate China’s AI models are six to twenty-four months behind their American counterparts—a range so wide it suggests fundamental uncertainty about technological trajectories.

Governments also struggle with the pick-a-winner problem. Britain’s plan calls for creating ‘UK Sovereign AI’ to support private sector partnerships and ‘maximise the UK’s stake in frontier AI’. But states have a poor track record of identifying technological winners. The European Union’s disastrous attempts to create so-called European champions in technology serve as cautionary tales.

More troubling is the potential for action plans to become vehicles for protectionism. The Trump administration’s approach explicitly seeks to revoke previous safety-focused regulations that allegedly hampered the private sector’s ability to innovate. This signals a shift from collaborative international development toward zero-sum competition.

The talent drain represents another danger. China has a much larger number of AI companies developing models than America, leading to dilution of investment and compute resources. Government plans that encourage too many players might inadvertently weaken national competitiveness.

The implementation gap

Perhaps the gravest risk is that action plans become substitutes for action itself. Publishing strategies provides politicians with visible achievements, but execution proves far more difficult. Research by Aircall found that education is actually the biggest barrier to AI implementation, with 63 per cent of small businesses highlighting a lack of understanding of what AI can do.

This implementation challenge applies even more forcefully to governments. The Centre for Democracy and Technology warns against rushing forward on AI adoption, noting it could lead to “wasted tax dollars on ineffective, ‘snake oil’ AI tools”. Public sectors worldwide are littered with failed technology projects that promised transformation but delivered disappointment.

Procurement presents particular difficulties. Britain’s plan acknowledges the need for faster, multi-stage gated AI procurement processes that can scale, along with compensation for startups that participate in bidding rounds. But reforming government purchasing remains one of the most intractable challenges in any public administration.

The skills gap compounds these problems. The UK aims to train tens of thousands of additional AI professionals by 2030, but universities and training programmes move slowly. Meanwhile, private companies offer higher salaries and more exciting opportunities for top talent.

A question of balance

The evidence suggests that AI action plans serve important functions despite their limitations. They provide necessary coordination mechanisms, signal government priorities to private investors, and help countries avoid regulatory fragmentation. The key lies in striking the right balance between ambition and realism.

The most effective plans focus on fundamentals rather than flashy initiatives. Infrastructure investment, education reform, and regulatory clarity matter more than vague promises about ‘AI dominance’. Britain’s emphasis on expanding compute capacity and creating mission-focused programme directors represents the sort of practical approach that might actually work.

International cooperation offers another path forward. The UK plan proposes international partnerships with like-minded countries and collaboration through existing frameworks like the EuroHPC Joint Undertaking. Such approaches acknowledge that no single country can master every aspect of AI development.

The planning process itself may matter as much as the final document. Forcing governments to think systematically about AI capabilities, infrastructure needs, and regulatory approaches creates value even if specific predictions prove wrong.

The alternative—muddling through without strategic direction—seems far worse.

The verdict

Governments do need AI action plans, but not for the reasons politicians typically cite. The goal should not be achieving AI dominance or beating other countries in some imaginary technology race. Instead, action plans serve as coordination mechanisms that help countries build the infrastructure, skills, and institutions necessary for an AI-enabled economy.

The best plans acknowledge uncertainty rather than pretending to predict the future. They focus on creating adaptive capacity rather than backing specific technologies. And they recognise that success depends more on execution than grand strategy.

As Britain’s plan concludes, “This is a crucial asymmetric bet—and one the UK can and must make”. The same logic applies globally. In an uncertain technological landscape, the biggest risk may be failing to plan at all. The countries that thoughtfully prepare for an AI-driven future—while remaining humble about their ability to predict its precise contours—will likely fare better than those that stumble forward blindly.

The question is not whether governments need AI action plans. It is whether they can resist the temptation to overpromise and instead focus on building the foundations for long-term success. On that crucial test, the jury remains decidedly out.
