Abhilasha Ravichander

/ɑ.bʰi.ˈla.ʃə/


Update: I will be joining the Max Planck Institute for Software Systems as an Assistant Professor starting Fall 2025! I will be recruiting Ph.D. students and interns. Please see this page for more details.

I am a postdoctoral scholar at the Paul G. Allen Center for Computer Science and Engineering at the University of Washington, where I work with Yejin Choi. I received my PhD from Carnegie Mellon University in 2022.
My research is dedicated to making AI models trustworthy. I develop frameworks to ensure large language models (LLMs) are factual, transparent, and robust. My work focuses on three key areas:

Understanding Large Language Models: Developing a scientific understanding of how large language models operate, and the principles that govern their behavior.
Enabling Transparency and Control: Leveraging this understanding to construct new frameworks that provide greater transparency and give users more control over AI, including control over their data and how their data is used.
Developing User-Centric AI: Fostering an understanding of the expectations and needs of the users of AI systems, in order to create AI that is responsible, high-performing, and aligned with user goals.

For more about my work, please see my publications.

What's New

🎤 Talk at TU Darmstadt.
🎤 Talk at the University of Mannheim.
🏆 HALoGEN won an outstanding paper award at ACL 2025 🎉
💫 Extremely excited to be joining the Max Planck Institute for Software Systems as an Assistant Professor in Fall 2025!
🎤 I was on the Women in AI Research podcast to talk about LLM hallucinations and data-centric AI.
🏆 HALoGEN won a best paper award at the TrustNLP workshop at NAACL 2025 🎉
🎤 I am speaking on a panel on "Navigating Research in the Age of LLMs" at the Widening NLP workshop at EMNLP 2024.
⭐ I am at the "Rising Stars in Generative AI" workshop at UMass Amherst.
🏆 OLMo won the ACL 2024 Best Theme Paper Award 🎉
🏆 Dolma won the ACL 2024 Best Resource Paper Award 🎉
🏆 Artifacts or Abduction? won a best paper award at MASC-SLL 2024 🎉
📚 I am co-organizing the Workshop on Privacy in Natural Language Processing @ ACL 2024.
📚 I co-organized the Workshop on Representation Learning for NLP @ ACL 2023.
🎤 I was on the Minds Matter podcast to talk about AI!
🎤 Talk at the National University of Singapore.
🎤 Talk at UMass NLP.
⭐ I am at the Rising Stars in EECS workshop at the University of Texas at Austin.
🏆 CondaQA won a best paper award at the 2022 SoCal NLP symposium 🎉
Older news.