Postdoctoral Fellows

The EPFL AI Center Postdoctoral Fellowship Programme brings together outstanding young researchers who collaborate with EPFL faculty across diverse scientific domains to advance the capabilities and applications of AI.

With a focus on excellence and collaborative research, our Fellows develop their own research agendas while contributing to the Center’s community and strategic initiatives.

Below, meet our Postdoctoral Fellows currently shaping the future of AI through the programme.

Orr Paradise

“Self-proving models: Generative AI with provable correctness guarantees”

Today’s AI agents are powerful but sometimes make mistakes or “hallucinate” incorrect information. This is especially concerning in engineering applications—from automated code generation to circuit design—where errors could cause system failures or security vulnerabilities.

My research develops “Self-Proving Models”: generative models that can formally prove their answers are correct. This approach builds on Interactive Proof systems, a well-studied and beautiful branch of computational complexity theory that has found important applications in blockchain verification and cryptography.

Think of it like a student who not only gives an answer but must convince a skeptical teacher through a back-and-forth dialogue. The AI generates an answer and then engages in multiple rounds of questions and responses with a verification algorithm, until the verifier is statistically convinced that the answer is correct. Crucially, this provides worst-case guarantees: the verifier will reject any incorrect answer with high probability, no matter how clever the AI trying to deceive it.
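To give a concrete flavor of a probabilistic verifier with worst-case soundness (this is a classic textbook example, not the fellowship's own protocol), consider Freivalds' check for matrix multiplication: a prover claims that C = A·B, and the verifier tests the claim with random vectors in O(n²) time per round instead of recomputing the O(n³) product. A correct C always passes; a wrong C is rejected in each round with probability at least 1/2, so repeating the check drives the error probability down exponentially.

```python
import random

def freivalds_check(A, B, C, n, rounds=20):
    """Probabilistically verify the claim A @ B == C (n x n integer matrices).

    A correct C is always accepted; an incorrect C is rejected with
    probability at least 1 - 2**(-rounds).
    """
    for _ in range(rounds):
        # Random 0/1 challenge vector, fresh each round
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A(Br) and Cr: two matrix-vector products, O(n^2) work
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught the prover in a lie
    return True  # accept: no round exposed an inconsistency
```

The design mirrors the worst-case guarantee described above: soundness does not depend on how the wrong answer was produced, only on the verifier's own randomness.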

As a theoretical computer scientist, I develop both the mathematical foundations and working implementations. We've already trained AI systems that can compute arithmetic operations and prove their answers correct. By bridging rigorous mathematical proofs with flexible AI systems, we're working toward provably correct artificial intelligence for domains where formal specifications exist.

EPFL Host Professors: Nicolas Flammarion, Thomas Bourgeat, Lenaïc Chizat, Viktor Kuncak.

Linus Bleistein

“Bridging the Data Divide: Active Learning Strategies for Multi-Modal Medical AI”

Multimodal healthcare data holds great promise for advancing modern standards of care through the use of artificial intelligence. However, algorithms trained on clinical data face two key challenges.

First, medical data is typically collected from a limited number of patients across multiple modalities along the care trajectory. This data is often affected by batch effects due to variations in data collection technologies and clinical practices, resulting in distribution shifts that undermine reliable inference. Second, algorithms operating in medical contexts must meet strict ethical and structural constraints. They need to be robust to distributional changes and missing data, respect fairness principles, and function within privacy-preserving federated frameworks.

In other words, high-quality data is scarce, and models face significant constraints in real-world medical settings. This makes transfer learning—reusing knowledge from related tasks—and active learning—selectively acquiring new informative data—especially attractive.
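To make the active-learning idea concrete, here is a minimal, hypothetical pool-based sketch (a toy illustration, not the project's actual method): a simple 1-D threshold classifier repeatedly queries the unlabeled point whose predicted label has maximal entropy, homing in on the decision boundary with very few label requests.

```python
import math

def binary_entropy(p):
    # Peaks at p = 0.5, where the model is least certain
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fit_threshold(labeled):
    # Toy 1-D "model": threshold halfway between the largest known
    # negative example and the smallest known positive example
    neg = max((x for x, y in labeled if y == 0), default=0.0)
    pos = min((x for x, y in labeled if y == 1), default=1.0)
    return (neg + pos) / 2

def predict_proba(x, threshold, scale=10.0):
    # Sigmoid around the threshold: confidence grows with distance
    return 1 / (1 + math.exp(-scale * (x - threshold)))

def active_learning(pool, oracle, n_queries):
    """Label the most informative pool points, querying `oracle` sparingly."""
    # Seed with one example from each end of the pool
    lo, hi = min(pool), max(pool)
    labeled = [(lo, oracle(lo)), (hi, oracle(hi))]
    pool = [x for x in pool if x not in dict(labeled)]
    for _ in range(n_queries):
        t = fit_threshold(labeled)
        # Uncertainty sampling: query the point with maximal entropy
        query = max(pool, key=lambda x: binary_entropy(predict_proba(x, t)))
        pool.remove(query)
        labeled.append((query, oracle(query)))
    return fit_threshold(labeled)
```

Because each query lands near the current decision boundary, the loop behaves like a noisy binary search: a handful of labels suffice where random sampling would waste most of its annotation budget, which is exactly why active learning is attractive when labeled medical data is scarce.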

My research project, conducted jointly in the labs of Prof. Charlotte Bunne and Prof. Bart Deplancke, aims to develop efficient and trustworthy algorithms capable of learning from existing tasks and datasets, intelligently selecting new training examples, and applying these methods to novel precision healthcare problems.

EPFL Host Professors: Charlotte Bunne, Bart Deplancke.

To learn more about our Fellowship programme, please visit the AI Fellowship Programme webpage.
