Building an AI Ethics Ecosystem: John Basl Explores Ethical Issues in AI and Data
How should we address critical ethical failures in AI?
For John Basl, the answer comes in two parts. Basl serves as an associate professor of philosophy and religion at Northeastern University, associate director at The Ethics Institute at Northeastern University, and core faculty member at the Institute for Experiential AI.
Basl discussed his research on key issues in AI ethics as part of our new series exploring the global impact of Northeastern faculty members.
The Pursuit of an “AI Ethics Ecosystem”
Basl advocates for an “AI ethics ecosystem”: a coordinated framework that addresses ethical challenges in AI and big data.
“People want to spin up AI ethics education tools or build governance infrastructure or build technical tools that sort of solve an AI ethics problem,” Basl said. “But that's not really how we solve ethical problems in other domains of technology.”
Basl drew a comparison to bioethics in healthcare. Bioethics offers tools that everyday practitioners use, governance that incentivizes the use of those tools, education on how to apply them, and foundational research to address novel problems. Bioethicists and interdisciplinary scholars then translate that research into the different parts of the ecosystem.
“What we need in AI is that same kind of ethics ecosystem,” Basl explained. “A lot of what I do is advocate for and try to coordinate those components and say what those should look like if we really want to manage the problems of AI.”
Exploring Explainable AI
The other strand of Basl’s research is “explainable AI,” which helps users understand the purpose, impacts, and potential biases of AI models.
When a doctor uses a medical diagnostic tool, there’s a difference between the information a patient needs to understand a decision and the information the doctor needs to understand how the tool arrived at that decision. For Basl, the same questions arise for AI tools generally: Why do these explanations matter, and who are they for?
“There’s lots of technical work on explainable AI, but there’s not enough work on the ethical foundations,” Basl said. “There are going to be very different answers to what explainable AI techniques should aim at if our goal is to explain decisions to the people who we’re making decisions about, or if we’re meant to explain them to decision makers.”
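To make that audience distinction concrete, here is a minimal, purely illustrative Python sketch. It is not drawn from Basl’s work or any particular explainable-AI library; the toy credit model, feature names, and threshold are all invented for this example. It contrasts a feature-attribution explanation aimed at a decision maker with a counterfactual explanation aimed at the person the decision is about.

```python
# Hypothetical sketch: one model decision, two explanations for two audiences.
# The linear "credit" model below is invented for illustration only.

# Tiny hand-coded linear scoring model: score = w . x + b
WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
BIAS = -0.2
THRESHOLD = 0.0  # approve if score >= 0

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS) + BIAS

def explain_for_decision_maker(applicant: dict) -> dict:
    """Feature-attribution style: how much each input pushed the score.
    Useful to a loan officer or auditor validating the model's behavior."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

def explain_for_decision_subject(applicant: dict) -> str:
    """Counterfactual style: the smallest single-feature change that would
    flip the outcome. Arguably more actionable for the person affected."""
    s = score(applicant)
    if s >= THRESHOLD:
        return "Approved."
    gap = THRESHOLD - s
    # Pick the feature requiring the smallest change to cross the threshold.
    best = min(WEIGHTS, key=lambda f: abs(gap / WEIGHTS[f]))
    delta = gap / WEIGHTS[best]
    return (f"Denied. Changing {best} by {delta:+.2f} "
            f"(normalized units) would flip the decision.")

applicant = {"income": 0.4, "debt": 0.9, "years_employed": 0.2}
print(explain_for_decision_maker(applicant))    # per-feature contributions
print(explain_for_decision_subject(applicant))  # actionable counterfactual
```

The point of the sketch is that both outputs are faithful to the same model, yet they answer different questions for different audiences, which is exactly the kind of distinction Basl argues the ethical foundations of explainable AI need to sort out.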
The Role of Philosophers and Ethicists in Responsible AI
A Northeastern alum, Basl credited his undergraduate studies as a major catalyst for his pursuit of philosophical research. That experience led to his return to the university in a full-time role, where he began work on theoretical research with practical applications.
That crossover is showcased in the Responsible AI Practice at the Institute for Experiential AI, which aims to help organizations navigate the ethical challenges presented by AI.
Basl called the Responsible AI Practice a valuable opportunity for organizations, praising the institute for working with industry to understand and mitigate AI ethics risks, develop new governance strategies, and respond to emerging regulations and best practices.
“Partnering with RAI Practice means you get people who understand the industry side and can work with you, but are also drawing on significant, meaningful ethics research,” Basl said. “This is a time when philosophers are really playing an important role, and so it's an opportunity to continue to help build that ethics ecosystem for AI. I'm excited to be a part of that.”
Talk with our experts to find out how Responsible AI Practice can help your organization.