The recent forced sale of the US operations of the popular video-sharing app TikTok (not the company as a whole) to an investor group led by US billionaire Larry Ellison is a highly significant geopolitical event. It shows not only how central social media have become to the global media landscape, but also how crucial it is to control the algorithms that shape what users see.
TikTok entered the US market in 2017 and quickly became popular, especially among young people. In just under eight years of availability in the United States, the app has allowed a foreign actor to influence American public opinion to an unprecedented degree, with major political consequences already visible.
Until last week, algorithms effectively controlled by the United States' greatest geopolitical rival determined what media content millions of Americans were served daily. That is immense power, and with realism returning to international politics in an era of hybrid warfare, it can be wielded as a weapon.
Tangible outcomes
Regardless of what one thinks about the morality of Israel's military operation in Gaza, many, including Benjamin Netanyahu himself, believe that TikTok, by deliberately promoting pro-Palestinian content, is the main reason for the sharp rise in anti-Israeli and antisemitic attitudes within the United States. Although it is difficult to quantify exactly how much TikTok has contributed to turning American public opinion against Israel, it has undoubtedly played a role.
The app has also likely contributed to certain domestic political outcomes in the U.S. that will soon result in concrete political power for unexpected actors. New York’s incoming mayor, the strongly left-leaning—by American standards—Democrat Zohran Mamdani, would likely never have won the Democratic nomination without help from the app.
Shaping public opinion
Regardless of where one stands on the Israeli-Palestinian conflict, or what one thinks of Mamdani, this development is troubling for the future of free expression. It should sound an alarm for anyone who believes in democracy and open political systems that such decisive influence can be concentrated in so few hands.
That a social media platform can wield such influence over the political attitudes of citizens in open societies is a direct threat to all democracies. The free flow of information is the very precondition for such governance: the foundation on which the pillars of democracy rest. Without that base, the entire structure collapses.
The algorithms
Today, algorithms that are secret to varying degrees determine which posts appear in users' feeds on social media and other information-sharing platforms. The algorithms of Facebook, YouTube, and TikTok are trade secrets, closed-source proprietary software to which no one outside the companies themselves has access.
X, formerly known as Twitter, does slightly better. Its algorithm is partially open source, most notably the code governing the recommendation system behind the 'For You' timeline. But problems remain: the open codebase is incomplete and has not been updated regularly, raising questions about whether the publicly available code reflects the algorithm currently in use.
What can be misused, will be misused
Even if foreign social media companies are, for now, non-hostile actors primarily seeking profit and tailoring their algorithms accordingly, it is obvious that those algorithms could be abused for realpolitik purposes. With simple adjustments, as the sketch below illustrates, they could be used to subtly steer users' political opinions to the benefit of those who control them.
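As a purely hypothetical illustration (this mirrors no real platform's code; every name, weight, and number below is invented), a single extra term in a feed-ranking function is enough to change which posts surface first:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    topic: str

def rank_score(post: Post, boosted_topic: str | None = None) -> float:
    # Ordinary engagement-based ranking.
    score = post.likes + 3 * post.shares
    # The "simple adjustment": a quiet multiplier for one chosen narrative.
    if boosted_topic is not None and post.topic == boosted_topic:
        score *= 1.5
    return score

posts = [
    Post("neutral news item", likes=900, shares=120, topic="news"),
    Post("post pushing narrative X", likes=700, shares=100, topic="narrative_x"),
]

# Without the tweak the neutral post ranks first; with it, narrative X does.
for boost in (None, "narrative_x"):
    top = max(posts, key=lambda p: rank_score(p, boost))
    print(boost, "->", top.text)
```

A change this small would be invisible to users, which is precisely why only access to the code itself can reveal it.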
The very possibility that the algorithms could be used for such malicious ends is a serious problem, even if those who control them have no such intent at present. The existence of the vulnerability alone leaves our societies exposed to foreign influence operations that do not have our best interests at heart.
European countries, which control none of the major social media platforms but are among their largest users, are therefore highly vulnerable to manipulation of public opinion. Our current laws protect the algorithms of social media as corporate trade secrets. Under today's legal framework, potentially hostile actors could thus in principle exploit loopholes in our legal systems to spread harmful information and outright propaganda, without us being able to prevent it.
A national security risk
In the worst case, such algorithms, built on closed code, can pose a national security risk, as they can be used to deliberately polarize our societies. Given that European countries are becoming increasingly heterogeneous, it would be easy to tailor users' timelines or content feeds with the intent of setting groups against each other and sowing division to incite political unrest.
As the TikTok example from the United States shows, it does not take long to shape public opinion through targeted algorithms—it may be a matter of just a few years. And once the cat is out of the bag, it can be extremely difficult—perhaps impossible—to put it back in, with potentially catastrophic consequences for our open societies.
Even if these vulnerabilities are not exploited immediately, they remain a latent political weakness that could, in principle, paralyze our countries at the click of a mouse. That is simply unacceptable, and our politicians must act, even if the cost is high.
A solution
Fortunately, there is a solution, but it will require the courage to stand up to extremely powerful business interests, and likely to both the United States and China. Paradoxically, the two arch-rivals may find common ground in preserving the status quo, since they dominate the social media market and since these platforms can now be used as instruments of realpolitik.
All European countries (and any others that wish to join) must demand that the algorithms of the major social media companies be published as open source, allowing real-time scrutiny. United, Europe constitutes a geo-economic power that cannot be ignored or intimidated, since these companies have hundreds of millions of users on the continent.
Such a solution would, one hopes, make the algorithms largely self-policing and keep them out of political propaganda, as tech-savvy observers could inspect the code whenever something seems off. In addition, a pan-European political body could be established to monitor these algorithms and regularly test whether certain political narratives are being deliberately promoted by those controlling them; a simple version of such a test is sketched below.
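As a toy illustration of what such testing could look like (not a real audit methodology; the sample sizes, topic labels, and thresholds below are invented), an auditor could compare how often a given topic appears in recommended feeds against its share of the overall content pool:

```python
import math
import random

def exposure_bias_z(feed_topics, topic, baseline_share):
    """z-statistic: is `topic` over-represented among recommended posts
    compared with its share of all eligible posts (baseline_share)?"""
    n = len(feed_topics)
    observed_share = sum(1 for t in feed_topics if t == topic) / n
    standard_error = math.sqrt(baseline_share * (1 - baseline_share) / n)
    return (observed_share - baseline_share) / standard_error

# Hypothetical audit: sample 10,000 recommended posts via neutral test
# accounts and label each post's topic. Here the topic makes up 5% of all
# eligible content but appears in roughly 8% of recommendations.
random.seed(0)
sampled = random.choices(["target_topic", "other"], weights=[0.08, 0.92], k=10_000)

z = exposure_bias_z(sampled, "target_topic", baseline_share=0.05)
print(f"z = {z:.1f}")  # values far above ~3 would warrant closer inspection
```

Statistical checks like this cannot prove intent, but combined with open-source code they would make covert promotion of a narrative much harder to hide.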
Critics of this proposal will likely claim that it constitutes an attack on free speech, but in fact the opposite is true. Allowing external actors, whose political and economic interests do not necessarily align with our own, to steer the media narrative through their algorithms directly undermines free expression.
Should the boardrooms of powerful tech corporations in Silicon Valley or Beijing really, in principle, hold veto power over which political, economic, and cultural issues see the light of day and reach public debate?