How AIs understand words

Researchers at EPFL have created a mathematical model that helps explain why breaking language into sequences of tokens makes modern AI, such as chatbots, so good at understanding and using words.

There is no doubt that AI technology dominates our world today. Progress is moving in leaps and bounds, especially in large language models (LLMs) like ChatGPT.

But how do they work? LLMs are made up of neural networks that process long sequences of “tokens.” Each token is typically a word or part of a word and is represented by a list of hundreds or thousands of numbers — what researchers call a “high-dimensional vector.” This list captures the word’s meaning and how it’s used.

For example, the word “cat” might become a list like [0.15, -0.22, 0.47, …, 0.09], while “dog” is encoded in a similar way but with its own unique numbers. Words with similar meanings get similar lists, so the LLM can recognize that “cat” and “dog” are more alike than “cat” and “banana.”
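
To make this concrete, here is a minimal sketch with made-up four-number vectors (real embeddings have hundreds or thousands of entries). The numbers are illustrative and not taken from any actual model; cosine similarity is one standard way to compare such vectors:

```python
import numpy as np

# Toy embeddings: short made-up vectors standing in for the long
# lists of numbers a real model would learn.
embeddings = {
    "cat":    np.array([0.15, -0.22, 0.47, 0.09]),
    "dog":    np.array([0.18, -0.19, 0.41, 0.12]),
    "banana": np.array([-0.52, 0.33, -0.08, 0.61]),
}

def cosine_similarity(a, b):
    # Close to 1.0 means the vectors point the same way (similar
    # meaning); values near or below 0 mean unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))     # high, ~0.99
print(cosine_similarity(embeddings["cat"], embeddings["banana"]))  # low, negative
```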

A black box, even for experts

Processing language as sequences of these vectors is clearly effective, but, ironically, we don’t really understand why. Simple mathematical models for long sequences of these high-dimensional tokens are still mostly unexplored.

This leaves a gap in our understanding: why does this approach work so well, and what makes it fundamentally different from older methods? Why is it better to present data to neural networks as sequences of high-dimensional tokens rather than as a single, long list of numbers? While today’s AI can write stories or answer questions impressively, the inner workings that make this possible are still a black box—even for experts.

Now, a team of scientists led by Lenka Zdeborová at EPFL has built the simplest possible mathematical model that still captures the heart of learning from tokens as LLMs do. Their model, called Bilinear Sequence Regression (BSR), strips away the complexity of real-world AI but keeps some of its essential structure and acts as a “theoretical playground” for studying how AI models learn from sequences.

How does BSR work? Imagine a sentence where you can turn each word into a list of numbers that captures its meaning — just like LLMs do. You line these lists up into a table, with one row per word. This table keeps track of the whole sequence and all the details packed into each word.
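
In code, that table is simply a matrix with one row per token and one column per embedding dimension. A minimal sketch, with placeholder words and random numbers standing in for learned embeddings:

```python
import numpy as np

# Illustrative sizes only: a sequence of L = 3 tokens, each embedded
# as a d = 4 dimensional vector (real models use far larger L and d).
tokens = ["the", "cat", "sat"]
d = 4
rng = np.random.default_rng(0)

# Stand-in embedding lookup; a real model learns these numbers.
embed = {tok: rng.standard_normal(d) for tok in tokens}

# The "table" described above: one row per token, one column per
# embedding dimension -- an L x d matrix X.
X = np.stack([embed[tok] for tok in tokens])
print(X.shape)  # (3, 4)
```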

A clear mathematical benchmark

Instead of processing all the information at once like older AI models, BSR looks at the rows of the table in one way and at the columns in another. The model then uses this information to predict a single outcome, such as the sentiment of the sentence.
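
Schematically, this row-and-column structure amounts to a bilinear readout: one weight vector scores the rows (one number per token position) and another scores the columns (one number per embedding dimension). The sketch below is our simplification of that idea, including the common 1/sqrt(Ld) scaling, not the paper's exact formulation:

```python
import numpy as np

# A toy L x d token matrix X, standing in for the table of embeddings.
rng = np.random.default_rng(0)
L, d = 3, 4
X = rng.standard_normal((L, d))

# Bilinear readout: u weights the rows (tokens), v weights the
# columns (embedding dimensions); combining them collapses the whole
# table into one predicted score, e.g. a sentiment value.
u = rng.standard_normal(L)
v = rng.standard_normal(d)

y_pred = u @ X @ v / np.sqrt(L * d)  # single scalar output
print(y_pred)
```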

The power of BSR is that it is simple enough to be fully solved with mathematics. This lets researchers see exactly when sequence-based learning starts to work, and how much data is needed for a model to reliably learn from patterns in sequences.

BSR sheds light on why we get better results from a sequence of embeddings than from flattening all the data into one big vector. The model also revealed sharp thresholds where learning jumps from useless to effective once it “sees” enough examples.
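
One back-of-the-envelope intuition behind the flattening comparison (our simplification, not the paper's full analysis): a model that flattens the table must learn one weight per entry, while a bilinear model needs only one weight per row plus one per column, as this quick count shows:

```python
# Illustrative sizes; real models are much larger.
L, d = 512, 768        # sequence length, embedding dimension

flattened_params = L * d   # one weight per entry of the big vector
bilinear_params = L + d    # one row vector plus one column vector

print(flattened_params)    # 393216
print(bilinear_params)     # 1280
```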

This research offers a new lens for understanding the inner workings of large language models. By solving BSR exactly, the team provides a clear mathematical benchmark that takes a step toward a theory that can guide the design of future AI systems. These insights could help scientists build models that are simpler, more efficient, and possibly more transparent.

Other contributors
ETH Zurich
Università Bocconi

Funding
Swiss National Science Foundation

References
Vittorio Erba, Emanuele Troiani, Luca Biggio, Antoine Maillard, Lenka Zdeborová. “Bilinear Sequence Regression: A Model for Learning from Long Sequences of High-Dimensional Tokens.” Physical Review X, 16 June 2025. DOI: 10.1103/l4p2-vrxt

Author: Nik Papageorgiou
Source: EPFL
