RAI Workshop Series – Tools or Testimony? Ambiguity in Justification for LLMs

As large language models (LLMs) like ChatGPT, Claude, and Gemini become deeply integrated into research, writing, and decision-making, questions around their epistemic status grow more urgent. Are these systems simply tools, or are they treated as if they were offering testimony?
This seminar explores the ambiguity at the heart of LLM use. These systems are non-agential and statistically driven, yet their fluent, assertive outputs often resemble expert contributions. They do not fit neatly into established categories like tools that extend human inference or sources of testimony that involve justification and accountability. As a result, users face a unique challenge: how to engage responsibly with systems that are persuasive but opaque.
Attendees will gain a framework for understanding this ambiguity and the demands it places on users. Drawing on virtue epistemology, the session will highlight key intellectual virtues—such as humility, autonomy, and epistemic justice—that can support responsible interpretation and evaluation of LLM output. The discussion will also explore the broader ethical stakes as generative AI is deployed across institutions and systems of trust.
Why attend?
Participants will leave with a deeper grasp of the epistemic and ethical complexities of using LLMs—and practical insight into how individuals and institutions can navigate this new terrain responsibly.
This workshop is brought to you by the Responsible AI Practice @ The Institute for Experiential AI as part of the RAI Workshop Series.
Want to hear about future Responsible AI events?
Sign up here to get updates on upcoming Responsible AI workshops and events.
Commentator
Keynote and Industry Speakers
Northeastern University Speakers
Agenda