A new phone, purchased last weekend, required a new case. Shelling out close on 700 euros for hardware will do that: the fear of dropping a phone (which this clumsy correspondent is prone to do) and causing expensive or irreparable damage makes a further 50 euros or so for a decent case not an indulgence but a necessity.
A few cases caught my eye, but none more so than one which claimed, somewhat brazenly, to be ‘optimised for AI’. My bullshit detector immediately kicked in. What on earth about a simple phone case could possibly make it optimal for AI? There’s no connection whatsoever between a case and the hardware that could make the phone’s AI capabilities function any better. Surely this was pure marketing?
In the end, I took a calculated risk and purchased a case that was not ‘optimised for AI’. Lo and behold, it has so far (unsurprisingly) performed well. I don’t appear to have sacrificed any AI functionality. Who’d have thought it?
That ‘optimised for AI’ case did get me wondering though. What else about AI is purely marketing? How many firms are ‘optimising for AI’ without really thinking about whether the technology adds anything to their businesses? Or whether they are merely jumping on a global bandwagon that now almost demands they use (or claim to use) AI, for fear of somehow missing out? (Or worse, of not appearing to be as up-to-the-minute as their competitors?)
There’s nothing new about AI washing
Plenty, it turns out. There’s even a term for this sort of thing: AI washing. Like greenwashing before it, AI washing describes the practice of firms exaggerating or outright fabricating their use of artificial intelligence to impress investors, woo customers or simply avoid appearing to have been left behind.
The US Securities and Exchange Commission (SEC) has been handing out fines since March 2024, when it charged two investment advisers (Delphia and Global Predictions) with making false claims about how they used AI. In January 2025 the regulator went after Presto Automation, a restaurant-technology company whose supposedly automated AI speech-recognition system turned out to involve considerable human intervention. By April 2025 the SEC and Department of Justice had jointly charged the founder of Nate, a shopping app, with raising over 42 million US dollars by claiming his product ran on AI when in fact it was powered by people manually completing purchases.
These are not isolated grifters, nor is the concept new. Back in 2019, a study by MMC Ventures, a London-based investment firm, found that 40 per cent of European tech companies branding themselves as ‘AI start-ups’ used virtually no AI at all. The label was a fundraising tactic, nothing more. A follow-up survey of 1,200 fintech firms in February 2025 found the picture had barely improved: four in ten self-described ‘AI-first’ companies had zero machine-learning code in production.
The corporate world’s obsession with being seen to do the right thing, or the latest thing (in this case, AI), runs deep. According to FactSet, a data provider, a record 297 S&P 500 companies mentioned AI on their third-quarter 2025 earnings calls, the highest number in a decade. “These companies feel like they have to say they’re doing something with AI, like in ’99 or 2000 where they had to say they were doing something with the internet,” observed Scott Wren, a strategist at Wells Fargo, in a recent piece in the Washington Post. The comparison is apt. During the dotcom bubble, sticking ‘e-’ in front of a company name was often enough to double its share price. Today, dropping ‘AI’ into an earnings call appears to serve a similar function: S&P 500 firms that cited AI saw share-price gains roughly double those of firms that did not.
AI is not an add-on
Even firms deploying AI in good faith are struggling to make it pay. A McKinsey survey from early 2025 found that while 88 per cent of companies now use AI in at least one function, fewer than 39 per cent report any impact on operating profits, and for most of those the effect is less than five per cent. BCG’s research is even more blunt: 60 per cent of companies generate no material value from their AI investments. The problem is that most firms treat AI as an add-on, rather than something to build around. They buy tools without redesigning workflows, appoint ‘Chief AI Officers’ willy-nilly, and announce strategies without measuring outcomes.
None of which is to say AI is worthless; far from it. The technology is transforming medicine and logistics, and making short work of tasks (such as document review, code generation and customer triage) that once consumed entire departments. Companies like JPMorgan Chase have deployed coding assistants that boosted engineer productivity by 10–20 per cent. That is real, measurable, useful.
Broadly, firms that identify a specific problem and apply AI to solve it tend to get results. Those that adopt it because it’s a ‘nice-to-have’, because competitors claim to use it, or because an ‘optimised for AI’ sticker might shift a few more units tend to waste money and, increasingly, attract the attention of bullshit detectors (and eventually, regulators). A phone case might be harmless enough. But across boardrooms from Bucharest to Boston, the instinct to dress up in AI without understanding why is costing real money and delivering precious little in return. AI for the sake of it is no strategy at all. It is, at best, like that phone case, an accessory.
Photo: Dreamstime.