Within five years we may have AI that does science

Published 27 January 2026

EPFL professor Robert West and invited professor Ágnes Horvát discuss how the rise of AI is transforming the dissemination and production of scientific knowledge.

Today, most academics share their research online. The public, journalists, and policymakers increasingly rely on digital media as a primary source of scientific information.

In a landscape where science is often misunderstood, politicized, or sensationalized, how do researchers best promote their work and how is the rise of AI transforming the dissemination and production of scientific knowledge?

EPFL associate professor Robert West, head of the Data Science Laboratory, and Ágnes Horvát, associate professor of communication and computer science at Northwestern University, where she directs the Lab on Innovation, Networks, and Knowledge (LINK), sat down to discuss science communication in our digital age.

You have been increasingly interested in how science is communicated in online spaces. What have you observed?

Ágnes Horvát: We focus on how information gets lost along the way and how misinformation creeps in. One of the things we are very concerned about is the way this content is sensationalized and overhyped. Another is misinformation, which is a large-scale problem affecting our entire news and information ecosystem. Finally, we are increasingly seeing AI taint this space.

If a large majority of people get their information from social networks and video channels, isn’t it crucial to use them to communicate science? At the same time, clickbait is how most content on these channels engages people, so it’s a bit of a double-edged sword, isn’t it?

Ágnes Horvát: That’s an excellent question. We can show, over a seven-year period, that scientists’ participation on social media brings a tangible gain in citations, which arguably are the traditional measure of success in science. Interestingly, that gain has been trending lower over the years, which is something to think about. Two key challenges in using social media to communicate science are the extreme compression of content and today’s new AI landscape. We looked at the latter in the past year, focusing on how abstracts in the biomedical sciences changed in 2024 compared with earlier years, and we found unmistakable traces of LLMs. We identified close to 500 words that give away LLM use, and we can say that at least around 13% of articles went through LLM massaging.
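To make the idea concrete, here is a minimal sketch of how such a marker-word screen might work. The marker words, and the names MARKER_WORDS, flags_llm_style, and flagged_share, are hypothetical placeholders of our own; the study described above derived its roughly 500 giveaway words empirically from frequency shifts in biomedical abstracts. Because an LLM-edited text can avoid every marker, the flagged share is only a lower bound, consistent with the “at least around 13%” phrasing.

```python
# Illustrative sketch of a marker-word screen for LLM-edited abstracts.
# The marker list below is a hypothetical placeholder, not the study's
# actual ~500 empirically derived giveaway words.
import re

MARKER_WORDS = {"delve", "intricate", "underscore", "pivotal", "showcase"}

def flags_llm_style(abstract: str) -> bool:
    """Return True if the abstract contains at least one marker word."""
    tokens = set(re.findall(r"[a-z]+", abstract.lower()))
    return not tokens.isdisjoint(MARKER_WORDS)

def flagged_share(abstracts: list[str]) -> float:
    """Fraction of abstracts flagged by the marker-word screen.

    Texts edited by an LLM without using any marker word go undetected,
    so this estimate is a lower bound on actual LLM involvement.
    """
    if not abstracts:
        return 0.0
    return sum(flags_llm_style(a) for a in abstracts) / len(abstracts)

if __name__ == "__main__":
    sample = [
        "We delve into the intricate dynamics of protein folding.",
        "We measured enzyme kinetics under varying pH conditions.",
    ]
    print(f"Flagged share: {flagged_share(sample):.0%}")  # prints 50%
```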

Bob West: Which is funny, because that number is roughly what we found for LLM-written paper reviews as well. We did this for the 2024 International Conference on Learning Representations, and at least 16% of the reviews were written at least partly with the help of LLMs. That creates this absurd scenario where you have AI writing papers that AI is then reviewing, and then you have people asking for AI summaries of the papers.

Ágnes Horvát: I think the saddest thing for me is that we are so welcoming of these tools, and I’m sure there is a sort of homogenization of ideas that we haven’t managed to quantify yet. I’m also concerned that these tools have a tendency towards certainty, because they simply must give an answer, and I think that’s a problem when abstracts, research papers or reviews sound more certain than they should. Not to mention that science communication is not only about facts but also about how those facts are presented, which affects which ideas sound appealing and what kind of future research gets done as a result. By surrendering some of this agency to LLMs, we are giving up those choices to some extent without knowing the consequences.

Bob West: But it’s not so clear; it can go either way. The baseline is low, because a lot of papers that humans write are badly written even if they have good ideas. So this is an example where AI could actually be an equalizer rather than a catalyst of inequalities.

We know that misinformation was a problem before AI, and today there is very little moderation on social media platforms, so misinformation spreads very quickly. Is AI compounding this challenge?

Ágnes Horvát: The entire system is vulnerable, as a lot of social media content is taken from other sources with unknown provenance. The one mechanism that is very clear is that AI can produce any kind of content more quickly, and if there are more bots producing misinformation, it proliferates faster.

Bob West: What we do know is that AI is very persuasive. So, when you prompt it to take a stance and defend that stance, it can do that at a level that is essentially superhuman, and so now you have a perfect propaganda machine that’s free. You used to have to pay spin doctors a lot of money to do this. The numbers that we compute running AI detection tools will typically vastly underestimate what’s really going on.

If you both had a crystal ball, and you could look ahead to the end of the decade, where would you see this challenge of communicating science evolving, both from a scientific perspective and a more public perspective?

Ágnes Horvát: Currently all the conversation is about how we present ideas that people have researched with the help of AI. Maybe the AI helped write up the work, maybe it helped with the code, maybe it helped with data collection, the literature review, whatnot. I think the next step, and perhaps five years is a reasonable time frame for that, is for AI to come up with the ideas we study. That’s very different territory, because then the AI is providing the hypothesis that needs to be researched. That’s a new problem space, and it’s so much more complicated than everything we’ve seen so far.

Bob West: Exactly. I agree that it’s a very realistic scenario that within five years we’ll have AI that does science. Will we even be able to follow the science the AI is doing at that point? That’s one of the reasons we turn to social media for science: it also acts as a filter. What should we look at? What are the trends? AI doesn’t have that problem, because it can just read all the papers.

If AI is coming up with the hypotheses and asking the questions, is it coming up with the right ones?

Ágnes Horvát: As humans, we have always assumed that we want a say in what’s being studied, and I don’t know how AI would negotiate values around what research is important for humanity’s future.

Bob West: And does AI care about what is important for humanity’s future in the first place? One of the hardest things in science is knowing what questions to ask, and I often struggle with this. Why are we doing this? What should be done next? So I think the question is not whether an AI can do this perfectly, but whether it can do it better than us. I’m not pessimistic in the sense of thinking AI can’t do it; my pessimism sits at a higher level: what if it does it superficially better than us, but doesn’t really care whether it matters for humanity’s better future? In five years, we’ll talk again!

Author: Tanya Petersen
Source: EPFL

