May 19, 2021 / 1:00-2:00 p.m. ET
Expeditions in Experiential AI
Many have raised worries about the ways in which algorithmic decision-making and prediction systems are opaque or "black box" systems, even for those with expertise in the relevant systems. At the same time, others are skeptical that this opacity is a problem, arguing that human decision-making is also opaque and we shouldn't hold algorithms to an unfair double standard; that it's not even clear what demands for transparency or interpretability amount to; or that the cost in transparency is trivial and worth the gains in accuracy we derive from such systems. In this talk, we explore the foundations of the problem of algorithmic opacity and develop an approach that grounds and explains when opacity is problematic and that answers the skeptical concerns above. This is joint work of the presenters and will be hosted via Zoom.
John Basl -
John Basl is an Associate Professor of Philosophy at Northeastern University. He works in moral philosophy and applied ethics, especially on the ethics of emerging technologies.
Jeff Behrends -
Jeff Behrends is a Lecturer in Philosophy at Harvard University and the Director of Ethics and Technology Initiatives at the Edmond J. Safra Center for Ethics. He also co-directs the Embedded EthiCS program, which integrates custom-designed ethics modules into existing computer science classes. His research addresses a range of issues in ethics, including moral epistemology, the metaphysics of practical reasons, and the ethics of AI.
David Gray Grant -
David Gray Grant is an Assistant Professor of Philosophy at the University of Texas at San Antonio and a Senior Research Fellow in Digital Ethics and Governance at the Jain Family Institute. His research focuses on ethical questions that arise when organizations use automated systems to make high-stakes decisions.