Europol’s recent forecast of crime in 2035 reads like speculative fiction with a badge. Autonomous drones repurposed for smuggling. Care robots hacked to harm their patients. AI systems quietly orchestrating fraud at a scale no human gang could manage. It is tempting to dismiss such scenarios as cinematic exaggeration. That would be a mistake.
What the report really exposes is not a coming wave of robot crime, but a present failure of imagination. We are regulating yesterday’s risks with yesterday’s tools, while tomorrow’s systems are already learning, adapting and proliferating at speed.
This is not a familiar story of technology racing ahead of regulation. We have heard that before—with social media, financial engineering and cybersecurity. The difference this time is qualitative. Artificial intelligence is not just another powerful tool; it is a capability that evolves, combines and operates at machine scale. The risks it creates are not linear, visible or easily attributable. And that makes them unusually hard for human institutions to grasp.
Blurred boundaries
Most regulatory frameworks still assume clear intent, identifiable actors and discrete events. Crime, in this worldview, has perpetrators, victims and timelines. But AI-driven systems blur those boundaries. Harm may emerge from optimisation rather than malice. Responsibility may be distributed across designers, deployers, users and data sources. By the time something goes wrong, no single decision looks obviously criminal—yet the outcome is deeply damaging.
This is why Europol’s warnings should be taken seriously: not because every scenario will materialise, but because they reveal how far our mental models have fallen behind.
The reflex response, particularly in policy circles, is to double down on expertise. More technical committees. More specialist regulators. More closed-door consultations between governments and technology firms. Expertise matters, of course. But it is no longer sufficient.
What’s missing is broader capability-building—not just among experts, but across society. The most interesting signal here is not Europol’s report itself, but the parallel call in Ireland for citizens’ assemblies to deliberate on AI governance. That instinct is often misunderstood as a plea for consensus or public reassurance. It is neither. It is an acknowledgement that the risks we face are not merely technical. They are social, ethical and political—and they require collective sense-making.
Building legitimacy
We cannot outsource imagination to engineers alone. Nor can we expect regulators, already stretched and structurally conservative, to anticipate futures they have never experienced. Democratic participation, done well, expands the range of questions we are willing to ask. It surfaces edge cases. It challenges assumptions about what is acceptable, not just what is efficient.
Crucially, it also builds legitimacy. When governance lags behind capability, trust erodes. When trust erodes, compliance weakens. And when compliance weakens, even the best-designed rules fail.
This is a shared failure. Governments have been reactive. Regulators cautious. Technology firms optimistic to the point of complacency. And society itself has been strangely passive, oscillating between hype and fear, rarely stopping to engage seriously with the trade-offs being made on its behalf.
We should also resist the comfort of historical analogy. This moment is not like aviation safety, nuclear power or financial regulation. Those systems were complex, but bounded. AI is general-purpose, combinatorial and increasingly autonomous. It does not sit neatly within sectors or jurisdictions. Treating it as just another emerging risk is precisely the error.
Upgrading how we think
The real danger, then, is not rogue robots or futuristic crime syndicates. It is our collective inability to imagine second- and third-order consequences before they harden into reality. By the time harms are obvious, they are usually entrenched.
Reinvention, in this context, is not about faster laws or smarter algorithms. It is about upgrading how we think. About designing governance that learns, adapts and involves more voices earlier. About accepting that uncertainty is not a temporary condition to be managed away, but a permanent feature of the systems we are building.
If we continue to regulate only what we already understand, we will always arrive too late. The real work now is not predicting the future of crime—it is expanding our imagination quickly enough to govern it.