Abhilasha Ravichander

/ɑ.bʰi.ˈla.ʃə/

I am a tenure-track faculty member at the Max Planck Institute for Software Systems, where I lead the QUEST NLP lab.

My research focuses on building trustworthy AI systems by: (1) shedding light on how they work, by understanding how pretraining and post-training data shape downstream behavior; (2) understanding why reasoning and factuality fail, and how to meaningfully advance these capabilities; and (3) developing technological interventions that increase human agency in the AI development pipeline. More broadly, my goal is to build responsible AI systems that amplify our ability to create, think, and discover. My work has been recognized with the ACL Outstanding Paper Award, the ACL Best Resource Paper Award, the ACL Best Theme Paper Award, and Rising Star recognitions in EECS, Data Science, and Generative AI. Previously, I was a postdoctoral scholar at the Paul G. Allen School of Computer Science and Engineering at the University of Washington and at the Allen Institute for AI. I received my PhD from Carnegie Mellon University in 2022.

💫 I am recruiting interns for Fall 2026/Spring 2027! Please see this page for more details.

Research Interests

I am excited by language technologies that can help people understand the world and create new knowledge, while remaining grounded and transparent. Here are some of the areas our group is exploring:

Data-Centric Interpretability

Data is a key component of modern AI training pipelines. We are interested in developing a principled understanding of how training data shapes downstream model behavior, and in how this understanding can enable more effective design of large language models. We are also interested in advancing technologies that empower data contributors to exercise meaningful agency over their contributions.

AI Creativity

We are interested in building models and evaluations for creativity and creative reasoning: moving beyond models fitted to the mode of their training data distributions, toward systems that can expand human thought and uncover new possibilities.

Reliable Information Synthesis

We are interested in building AI-powered information systems that help people find accurate, trustworthy, and contextually useful information in real-world settings. As AI agents increasingly mediate how people search, synthesize, reason, and act on information, we aim to advance the design of these systems, while also studying their associated social considerations.

For more about our group's work, see my publications or visit the lab page.

Recent Media and Talks