
The AIs Have It: Will Disinformation Erode Democracy In 2024?

“Next year is being labelled the ‘Year of Democracy’,” said Marietje Schaake in the Financial Times. 

A series of key elections are scheduled to take place in the UK, US, EU, India, Indonesia and potentially Ukraine. And artificial intelligence is “one of the wild cards that may well play a decisive role” in the votes, wrote Schaake, policy director at Stanford University’s Cyber Policy Center.

In a speech at the Royal Society in London today, Rishi Sunak said that although he was “unashamedly optimistic” about the power of technology to improve life, “it also brings dangers and new fears”. 


An accompanying report from the Government Office for Science warned that by 2025, generative AI “could be used to commit fraud and mount cyberattacks”, said The Times. The more severe risks that could emerge by 2030 include “mass disinformation”. AI could lead to the “erosion of trust in information”, with “hyper-realistic bots” and “deepfakes” muddying the waters, said the report. 

The analysis of the risks, which includes assessments by the UK intelligence agencies, provides "a stark warning", Sunak said.

The prime minister’s speech, said the BBC, “sets the scene” for the world’s first AI safety summit next week. The UK-hosted event, at Bletchley Park, follows a warning from the EU’s cybersecurity agency ENISA about a recent surge in AI chatbots and deepfake images and videos that could threaten the bloc’s parliamentary election next year.

The government “has published a trio of documents saying artificial intelligence will take us to hell in a handcart with sabotaged elections, collapse in public trust, sextortion, bioweapons and societal unrest”, said Politico’s London Playbook newsletter. “Or it won’t. Who knows?”

Generative AI, “which makes synthetic texts, videos and voice messages easy to produce and difficult to distinguish from human-generated content, has been embraced by some political campaign teams”, said Schaake in the FT. However, “while much of generative AI’s impact on elections is still being studied, what is known does not reassure.”

Many social media companies have laid off teams who dealt with disinformation, and YouTube has said it will no longer remove “content that advances false claims” about past US elections. Elon Musk has “gutted” Twitter’s trust and safety teams. “Right when defence barriers are needed the most, they are being taken down.”

Politics “has always been stalked by propaganda”, said The Economist. Disinformation “was already a problem in democracies” without AI – like the “corrosive” claim that the 2020 US election was rigged, which “was spread by Donald Trump, Republican elites and conservative mass-media outlets using conventional means”. The fear now is that disinformation campaigns may be “supercharged” in 2024, just as countries with a collective population of about 4 billion prepare to vote.

Just days before the recent Slovakian election, fake audio recordings of Michal Šimečka, the leader of the Progressive Slovakia Party, were shared online, in which he was heard discussing plans to rig the ballot, said Politics Home’s “The House” magazine. “What made them notable was less the fact that they were faked, but how they were faked: most likely with free or cheap online tools, readily available to anybody with the inclination to cause a little chaos.” 

A similar occurrence with a fake audio clip of Labour leader Keir Starmer moved Conservative MP Simon Clarke to brand generative AI as “a new threat to democracy”, said Tom Phillips, former editor of fact-checking organisation Full Fact. As the UK “enters the long slog towards a general election, that’s a fear that will only grow”.

Although threats of disinformation, manipulated evidence and hoaxes aren’t new, AI “lets you do it far quicker, far cheaper and at an unprecedented scale”. 

But AI could also represent “a new era in political campaigning”, said global health policy expert Ade Adeyemi for Labour List. AI “is a force to be reckoned with”, and now is the time for political parties and their candidates to “embrace AI to make a deeper connection with voters”. 

There is a “golden opportunity for Labour to harness AI-driven technologies to its advantage” – although the party must tread carefully on the “ethical tightrope” the technology presents. 

What’s next?

The prime minister announced that the UK would establish the world’s first AI safety institute. At the Bletchley Park summit, he will propose a global expert panel nominated by attendees to publish a “state of AI science” report.

There are possible steps to “prevent this new technology from causing unpleasant surprises in 2024”, noted Schaake, including independent audits for bias, research into disinformation efforts and the study of elections this year, including in Poland and Egypt.

But there are “reasons to believe AI is not about to wreck humanity’s 2,500-year-old experiment with democracy”, said The Economist. Although it is important to be mindful of the potential of AI to disrupt democracies, “panic is unwarranted”. Voters are hard to persuade, “especially on salient political issues such as whom they want to be president”.