Event Details
Hallucination in the Wild: A Field Guide for LLM Users
Spotting, Understanding, and Reducing AI Mistakes
Large Language Models like ChatGPT are incredibly good at sounding smart—even when they’re completely wrong. This tendency to produce false or misleading information, often called hallucination, is one of the most persistent challenges in modern AI.
In this talk, Ashley Lewis from OSU Linguistics will explain why these models hallucinate, what makes it difficult for them to recognize uncertainty, and why existing solutions often fall short. She'll also share insights from her research on building smaller, more efficient models that make fewer mistakes, on evaluating model trustworthiness more effectively, and on practical strategies, such as better prompting, that can reduce hallucinations in the tools we use today.
Whether you use LLMs every day or just wonder how they work, this talk offers a behind-the-scenes look at one of AI’s most pressing problems.
About Our Speaker
Ashley (Ash) Lewis is a PhD candidate in Computational Linguistics at The Ohio State University. Her research focuses on reducing hallucinations in large language models (LLMs) through efficient training methods like knowledge distillation and self-training, as well as developing better tools for evaluating model trustworthiness. She is currently building a virtual, document-grounded tour guide for the COSI science museum in Columbus, Ohio.