The European Union has discovered a new weapon in its battle for technological relevance: the AI factory. Brussels plans to deploy at least 15 such facilities by 2026, backed by 10 billion euros in public funds. A further 20 billion euros is earmarked for five ‘AI Gigafactories’ housing more than 100,000 AI processors each. Commissioners speak of “dynamic ecosystems” that will unite computing power, data and talent.
Europe, they promise, will become the world’s hub for trustworthy artificial intelligence.
Strip away the rhetoric and the picture loses its gloss. An AI factory is, in layman's terms, a supercomputing centre optimised for machine-learning workloads and opened to start-ups. Think of it as Infrastructure-as-a-Service with a catchier name. The EU already boasts world-class supercomputers such as Spain's MareNostrum 5 and Germany's JUPITER, machines of a class traditionally devoted to weather modelling and particle physics.
Now these machines—or rather, upgraded versions of them—will be repurposed to train large language models and other AI systems. European start-ups will get ‘privileged access’ to chip time. A one-stop shop will provide support services. Innovation will bloom.
Intellectual capture
Or so the theory goes. The reality may prove less inspiring. For one thing, the terminology is misleading. Jensen Huang, chief executive of Nvidia, has spent the past year evangelising about AI factories—data centres stuffed with his company’s graphics processing units (GPUs), churning out intelligence on demand. His vision is less about public infrastructure than private profit. That Brussels has adopted the same language suggests a degree of intellectual capture.
More worrying is the track record of similar initiatives elsewhere. After ChatGPT’s debut in late 2022, China made AI infrastructure a national priority. Local governments raced to build ‘smart computing centres’, with over 500 announced by 2024 and at least 150 completed. Chinese news outlets now report that up to 80 per cent of this newly built capacity sits idle. Many projects were driven by subsidies and cheap electricity rather than genuine demand. The industry has left a trail of empty buildings and unprofitable ventures.
Europe’s planners insist their approach differs. The AI factories will leverage existing supercomputing infrastructure, they say, rather than building from scratch. Academic rigour will temper commercial exuberance. Yet questions persist. AI hardware becomes obsolete with alarming speed—Nvidia releases new chip architectures every six to nine months. Data centres built in 2024 are already unsuitable for today’s most demanding workloads.
Why build massive infrastructure at all?
The costs extend beyond silicon. High-speed interconnects—the networking fabric that ties thousands of processors together—can account for 15 per cent of capital expenditure in large GPU clusters. Power and cooling requirements are extreme: the densest GPU racks now draw more than 100 kilowatts each, pushing operators towards direct liquid cooling on a scale commercial data centres have never attempted.
Then there is the utilisation problem. Training a large AI model requires enormous computational resources for weeks or months. Once training concludes, however, the hardware often sits idle. Unlike database servers that hum along 24 hours a day, AI training infrastructure experiences feast and famine. This makes the economics precarious. As one industry executive put it, companies are “losing money because of these efforts” to keep pace with the latest chips.
The emergence of DeepSeek, a Chinese start-up, further complicates the picture. Earlier this year DeepSeek released an open-source AI model that rivals OpenAI's offerings whilst using far fewer resources. At 0.10 US dollars per million tokens—compared with OpenAI's 4.40 US dollars—DeepSeek demonstrated that clever algorithms can substitute for brute computational force. The revelation rattled the industry; Microsoft was reported to have cancelled leases on American data-centre capacity. The question became unavoidable: if competitive models can be trained so much more efficiently, why build massive infrastructure at all?
Europe’s AI start-ups do face genuine constraints. Private investment in AI reached 292 billion euros in America in 2024, 88 billion euros in China and just 43 billion euros in the EU. Access to cutting-edge compute remains a bottleneck for small firms trying to train proprietary models. The AI factories address this by providing free access to supercomputing time for approved projects—a sensible use of public infrastructure.
Don’t break anything
The devil, however, lurks in the details. Start-ups seeking access must demonstrate they are developing ‘ethical and responsible’ AI aligned with European values. Translation: only those deemed sufficiently trustworthy by Brussels need apply. Given the EU’s fondness for regulation—the AI Act categorises systems by risk level and imposes stringent requirements on high-risk applications—one wonders whether the most innovative firms will bother with the bureaucratic maze. Silicon Valley’s move-fast-and-break-things ethos sits uncomfortably with Europe’s precautionary approach.
The broader question is whether pouring billions into AI infrastructure tackles Europe's real problems. The continent has world-class researchers and universities. What it lacks is a thriving ecosystem for scaling companies. Regulatory burdens, fragmented markets and risk-averse capital conspire to keep start-ups small. No amount of supercomputing time will change that. America's AI dominance rests not on superior hardware but on a willingness to fund ambitious bets, tolerate failure and allow successful firms to grow rapidly.
There is also an environmental reckoning to consider. Data centres worldwide are projected to emit a cumulative 2.5 billion tonnes of CO2 by 2030, and AI workloads are among the most energy-hungry. Many facilities are being built in regions already vulnerable to rising temperatures, compounding heat risks for nearby communities. Water usage poses another concern: xAI's Memphis supercomputer has faced criticism for its draw on the local aquifer. Europe's AI factories will face similar scrutiny, particularly as the bloc pursues ambitious climate targets.
Europe’s deeper challenges
None of this means AI factories serve no purpose. Pooling computational resources makes sense for a continent that lags in private AI investment. Providing start-ups with access to expensive hardware could level the playing field, at least marginally. But calling them ‘factories’ overstates their novelty and importance. They are shared supercomputers with some extra GPUs and a friendlier booking system. Useful? Perhaps. Revolutionary? Hardly.
The danger is that politicians will declare victory once the facilities are operational, without addressing the deeper challenges facing European tech. As one Chinese data-centre executive observed after his country’s AI infrastructure boom went bust, “what stands between now and a future where AI is actually everywhere is not infrastructure any more, but solid plans to deploy the technology.” Europe would do well to heed that lesson. Building AI factories is the easy part. Building AI companies that matter will require harder choices about regulation, risk and what kind of innovation the continent truly values.
Brussels may discover that in the race for AI supremacy, having the fanciest facilities matters less than having the freedom to use them.