Can AI influence election outcomes?

Artificial intelligence (AI) can be a weapon of mass disinformation, but a recent report suggests that its impact on elections has so far been limited.

Voters in nearly 100 countries – including Taiwan, the US and Senegal – went to the polls this year, and AI was often used during the election campaigns. When deployed malevolently, through deepfakes and chatbots for example, the technology erodes citizens’ trust in the information provided by news outlets, whether on TV, online or on social media. AI-driven programs have clearly affected the reliability of the information we receive, but have they affected election outcomes? A team of researchers at the EPFL-based Initiative for Media Innovation (IMI) conducted a study of the influence that AI had on elections around the world in 2024. The findings appear in the first issue of IMI’s Décryptage magazine (in French only), written by Swiss journalist Gilles Labarthe in association with Mounir Krichane and Julie Schüpbach, both at IMI, and Christophe Shenk, chair of IMI’s Scientific Committee and head of digital news coordination at Swiss broadcasting company RTS.

Resurrected political figures

The researchers worked with local experts to analyze the various election campaigns and results. They found that AI-driven programs had only a marginal impact and didn’t swing the elections one way or the other. However, the study did find that the spread of manipulated content, boosted by algorithms, further polarized political opinion and created a widespread climate of mistrust. For example, deepfakes – videos digitally altered to make real people appear to say or do things they never did – were used in election campaigns in both the US and Switzerland. Meanwhile, generative AI was taken to a whole new level in India and Indonesia, where programmers brought political figures back from the dead, creating avatars intended to sway voters.

“Technology on its own won’t be enough. Human users are the weak link.”

– Touradj Ebrahimi, Head of EPFL’s Multimedia Signal Processing Group

The authors of the study stress that the use of digitally manipulated content for propaganda purposes is nothing new; AI has only amplified this practice. The large-scale production and rapid dissemination of fake content – whether in video, image or text format – during election campaigns have undermined citizens’ trust. The authors also point to a regulatory vacuum that has enabled such content to circulate freely.

In an interview for the magazine, Prof. Touradj Ebrahimi, head of EPFL’s Multimedia Signal Processing Group, says that deepfakes are creating unprecedented technical, societal and ethical challenges. “It’s a game of cat and mouse between the creators of AI tools that generate deepfakes and the developers of software that detects them.” His research group is working to develop systems for identifying and limiting the dissemination of manipulated content (see below).

A collective effort

The IMI magazine provides a sweeping view of the risks that AI poses for election campaigns. It also gives concrete recommendations from scientists, other experts and media professionals for reducing the impact of disinformation, and suggests actions citizens can take. One recommendation is to implement fake-content detection and tracing systems, like the ones being developed by Ebrahimi’s group.

The magazine highlights the importance of introducing international regulations and of holding the media accountable. For his part, Ebrahimi says it will be essential to encourage collaborative fact-checking and promote education as a powerful ally in the fight against disinformation. “Technology on its own won’t be enough,” he says. “Human users are the weak link – we’ve got to make them aware of the risks associated with fake news and give them resources for verifying the sources of the information they receive.”

Finally, the magazine underscores the crucial role that governments, businesses and civil society can play in making the digital space both ethical and secure. This will require a collective effort to restore trust in the democratic process as AI becomes ever more prevalent.

Winning the fight against disinformation will require not just the right technology but also, as the IMI magazine explains, a concerted effort among scientists and engineers, governments, businesses and citizens to make information reliable again.

EPFL’s unique expertise in combating manipulated content
At Prof. Ebrahimi’s Multimedia Signal Processing Group, engineers are working to develop technology that can effectively detect and stem the spread of manipulated content. This includes implementing the JPEG Trust standard so that the authenticity of images can be verified from the time they’re created until they’re published.
“There’s no magic bullet,” says Ebrahimi. “Instead, we’ll need to combine several indicators in order to build trust and reduce the risks.” This proactive approach could entail adding digital signatures to content, for example, so that users can trace it and detect any unauthorized changes.
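To make the signing idea concrete, here is a minimal sketch in Python using the widely available cryptography library. It is an illustration of the general principle only, not EPFL’s JPEG Trust implementation: the publisher signs the image bytes when the file is created, and anyone holding the public key can later check whether the content was altered. The file name photo.jpg is hypothetical.

```python
# Minimal content-signing sketch with an Ed25519 key pair.
# Illustrative only; this is not the JPEG Trust standard itself.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The creator signs the image bytes at publication time.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = open("photo.jpg", "rb").read()  # hypothetical image file
signature = private_key.sign(image_bytes)

# Anyone with the public key can detect unauthorized changes.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes, signature))             # True
print(is_authentic(image_bytes + b"edit", signature))   # False: tampered
```

In a real provenance system the signature and public key would travel with the file as metadata, so that each step from camera to newsroom to reader can be traced.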
Ebrahimi’s group is also examining the use of generative adversarial networks (GANs), in which two machine-learning programs compete against each other: one produces fake content while the other learns to detect it. GANs can enhance the ability of detection technology to spot even the most sophisticated deepfakes, providing a valuable tool for online media outlets and other content platforms.
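For intuition, the adversarial setup can be sketched in a few lines of PyTorch. This is a toy running on random tensors, not the group’s actual detection system: a “forger” network produces counterfeits while a “detector” network learns to flag them, and each update makes the other’s task harder.

```python
# Toy GAN sketch: a "forger" (generator) vs. a "detector" (discriminator).
# Illustrative only; real deepfake detectors are far larger and train on images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(32, 784)      # stand-in for a batch of real images
    fake = G(torch.randn(32, 64))    # forger produces counterfeits

    # Detector step: push real samples toward label 1, fakes toward label 0.
    d_loss = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Forger step: try to make the detector label the fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The detector half of this loop is what matters for the media use case: trained against an ever-improving forger, it becomes harder to fool than one trained on a fixed set of fakes.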

Encouraging innovation in digital media
IMI was founded in 2018 by public- and private-sector organizations to promote digital innovation in the media. Its members include EPFL, SRG SSR, Ringier and the Universities of Geneva, Lausanne and Neuchâtel. The initiative has the support of the Swiss Federal Office of Communications.

Author: Mélissa Anchisi

Source: EPFL
