“AI, good servant, bad master”

Published
10 February 2026

AI is making its way into every aspect of our lives, but how ready are we to adopt it, and under what conditions? A report published in French and German by the EPFL AI Center, in collaboration with the University of Geneva, presents both the perceptions of a sample of Switzerland’s French-speaking population and the recommendations of a citizens’ assembly.

Massive use, despite serious concerns about cyberattacks, deepfakes, and privacy: this is the paradox that emerged from a citizens’ assembly asked to reflect on attitudes towards artificial intelligence. Initiated by the EPFL AI Center, in collaboration with the Swiss Research Center on Democratic Innovations at the University of Geneva and the Demoscan association, this unprecedented process has resulted in the publication of a final report in French and German. It contains 20 concrete proposals to regulate and support the deployment of AI. In short: AI is a good servant but a bad master.

“AI is one of the most significant technological transformations of our time. Its rapid development affects work, health, education, privacy and democratic life,” says Marcel Salathé, co-director of the EPFL AI Center. “Given the scale of these issues, the report reminds us that it is essential for citizens to be able to express themselves and help shape the future of this technology, rather than simply suffering its effects.”

A two-step approach

This document is the result of two complementary initiatives. First, a survey supported by the Swiss Federal Statistical Office collected the views of 734 residents from the French-speaking regions of Switzerland. The questionnaire focused mainly on the uses and public perceptions of AI.

Among the respondents who expressed interest, 40 citizens were selected to form a diverse panel (canton, age, education level, political interest). The assembly then met over two weekends in November to discuss, debate and deliberate. The process was designed and conducted by the Demoscan association, which oversaw the methodology and facilitation, guided by a central principle of neutrality to foster an informed and balanced discussion.

“Democracy is not limited to the ballot box. A citizens’ assembly is a mechanism that transforms intuitive opinion into reflective judgment: participants have enough time, receive relevant information, work within a neutral framework for debate, and produce reasoned proposals,” says Nenad Stojanović, Professor at the University of Geneva and co-founder of Demoscan.

Widespread adoption but strong expectations

The survey shows that AI has already become mainstream: 87% of respondents have used at least one AI tool, and ChatGPT is by far the most common (70%). But this adoption comes with strong expectations around transparency and regulation. Nine in ten respondents believe systems should clearly indicate when users are interacting with a machine, while 70% want public authorities to strictly regulate AI development.

The most pressing fears relate to malicious use. More than 80% cite hacking and cyberattacks as a major risk, while 77% worry about deepfakes and misinformation, reflecting growing anxiety about the erosion of trust in digital content. Privacy comes next (65%), followed by the impact on jobs (59%). Overall, nearly 69% see AI as a serious threat to privacy and data security.

Who should govern AI?

When asked who should take the lead in governing AI, about a third of respondents point to the Swiss government (31%). Yet nearly as many say they don’t know (27%), highlighting uncertainty around institutional responsibility.

In terms of political priorities, data protection stands out clearly: 68% rank it as the top issue, ahead of ethical guidelines and transparency (41%), and the prevention of uncontrolled AI (40%).

The report is intended to inform academic, institutional, and political debates by grounding them in public expectations. In the preface, participants convey a central message: AI systems should not be allowed to make decisions autonomously in ways that weaken individual choice or create dependency.

Salathé hopes the process can now be scaled up. “The next step could be to extend this initiative across Switzerland, so that AI governance is informed by citizen recommendations in all language regions,” he says.

20 proposals, structured around five themes

At the end of the deliberations, the citizens’ assembly produced 20 proposals grouped into five broad areas:

1. Responsible practices: establishing dedicated legislation and educational tools to improve data protection and reduce cyber risks.
2. The role of the State: including the creation of a Federal Office for AI to secure long-term research funding.
3. Access and education: strengthening public awareness of the risks of generative AI and encouraging social interaction to avoid an “all-AI” society.
4. The world of work: preparing for economic and personal impacts of job loss or job transformation, including support for retraining.
5. Traceability: introducing labeling to identify and promote human-made content, alongside stronger copyright protections.

The report is available in both French and German.

This project was supported by the EPFL AI Center and Stiftung Mercator Schweiz. 

Author: Melissa Anchisi
