by Tyler Wells Lynch
The Inherent Value of Trustworthy AI panel at Discover Experiential AI. (Photo by Heratch Ekmekjian)
What does it mean to trust something? If you ask Helen Nissenbaum, professor and director of the Digital Life Initiative at Cornell Tech, she’ll tell you it has something to do with vulnerability. When you trust something, you have faith that the object of your trust will not harm you. Can we say the same about artificial intelligence (AI)? If you ask the guests who spoke on the last panel of the day at Discover Experiential AI, the answer is a resounding no.
Moderated by John Basl, associate professor of philosophy at Northeastern, and Jennifer Dy, Director of AI Faculty at the Institute for Experiential AI (EAI), The Inherent Value of Trustworthy AI panel featured Helen Nissenbaum alongside EAI core members Tina Eliassi-Rad and Christo Wilson, as well as Cynthia Rudin, professor of computer science at Duke University.
Together, they sought to answer some of the toughest questions in AI ethics: Why are there so many biased algorithms? Who’s to blame when unjust outcomes occur? How should responsibility be portioned out? What lies at the heart of an unjust algorithm — is it the data or the people who collect it? To solve these problems, should we turn to public policy, the educational system, or the research community that birthed them?
Accountability, Transparency, Privacy
All the panelists seemed to agree on one thing: if we’re to create trustworthy AI, we need to incorporate our values into every stage of design, development, and deployment. That hinges on whether we can imbue human values into inherently quantitative systems. So can it be done?
With their academic backgrounds, the panelists were well positioned to critique the problem as it exists on campus. They argued that the academic community has too little accountability and too few professional norms. Data scientist Tina Eliassi-Rad blasted the careless attitude too many researchers bring to designing algorithms.
“There’s nothing, technically, in AI that will get you excommunicated,” she said. “There are no professional norms. In any other discipline, if you do certain things, you can get excommunicated. Not in AI.”
This lack of accountability allows systems to be built with little to no oversight. In professional settings where data scientists are rare or even nonexistent, it should come as no surprise that algorithms are treated as black boxes: there are inputs and outputs, and the internal workings are for others to worry about.
In fact, that’s how most people think of AI, including the very court systems, hospitals, and private companies that use AI to make sense of their data. With his background in cybersecurity, Christo Wilson illustrated how this asymmetrical arrangement is all around us:
“They’re collecting a lot of data about us,” Wilson said. “They have a lot of power over us — the things we see online, the prices we pay, whether you get a job, whether you get a house, whether you get government benefits, whether you go to jail or are set free. We don’t know how those things work, and yet they increasingly govern our life.”
Is it any wonder that trust is so hard to come by?
Defining Trust
Professor Nissenbaum pointed to philosophical literature defining trust as a state of vulnerability between a subject and the object of their trust. In relationships where trust holds, such as those between doctor and patient or pilot and passenger, actors have earned a reputation for being neither malicious nor careless. There’s a kind of mutual self-interest: the doctor or the pilot has a stake in the patient’s or passenger’s well-being.
But that dynamic doesn’t exist in AI, where the interests of an algorithm’s developers diverge considerably from those of the people most affected by its deployment, such as patients, parolees, or customers.
“My concern with many of the actors who are developing these AI systems is that the factors that enable trustworthiness are simply not in place,” said Nissenbaum.
Wilson concurred, pointing out that the third-party designers of these systems are far removed from their impact and often shielded from accountability. How can we expect trust to develop at such a distance?
The Path Forward
So it’s a problem of deficiency: not enough accountability, not enough transparency, not enough interpretability. How can we introduce these measures and shift power back toward those affected? For Cynthia Rudin, it begins with data: too much of it is bad, and much of what’s needed is missing.
“If data are not trustworthy,” she said, “what makes us think the models we build from that untrustworthy data can be trustworthy?”
By making data and the models built from it more interpretable, though, we can begin to democratize their use. Of course, that creates a computational challenge, since algorithms now have to be optimized for interpretability. “But,” Rudin said, “that’s one thing I think we can handle.”
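Rudin’s broader research argues for inherently interpretable models rather than opaque ones explained after the fact. As a rough illustration of the trade-off she describes, and not code from the panel, the sketch below compares a depth-limited decision tree, whose rules a reviewer can read line by line, against a black-box ensemble; the dataset and model choices are assumptions for illustration only.

```python
# Illustrative sketch only: an interpretable, depth-limited decision tree
# versus a black-box ensemble. Dataset and models are assumed for the example.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

# Interpretable model: every decision rule can be printed and audited.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Black-box baseline: often slightly more accurate, but opaque.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("tree accuracy:  ", tree.score(X_test, y_test))
print("forest accuracy:", forest.score(X_test, y_test))

# The tree's full logic fits on a page; a reviewer can check it directly.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Rudin has argued that on many problems the accuracy gap between models like these is smaller than commonly assumed, which is what makes the computational challenge she mentions one “we can handle.”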
For Wilson, we need to see more auditing of algorithms — that is, subjecting public algorithms to public scrutiny. That’s no simple task since private companies don’t generally want to share that information. And it’s not enough to simply audit an algorithm post-hoc. If a system is not designed with interpretability, fairness, or human values in mind, then there’s almost no chance it will stand up to ethical scrutiny. “Success in terms of trustworthiness starts at the point of conception,” he said.
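What might one narrow audit check look like in practice? The sketch below is a hedged illustration rather than anything the panelists described: it computes favorable-outcome rates across demographic groups from a hypothetical decision log and applies the familiar four-fifths rule of thumb. As Wilson notes, this kind of post-hoc check is only a starting point, not a substitute for building values in from the point of conception.

```python
# Minimal, assumed example of one audit check: comparing favorable-outcome
# rates across groups. The decision log here is hypothetical; real auditors
# often struggle to obtain even this much access.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Favorable-outcome rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Four-fifths rule of thumb: flag if the lowest group rate falls below 80%
# of the highest. Passing this check does not make a system fair or trustworthy.
disparity = rates.min() / rates.max()
print(f"disparate impact ratio: {disparity:.2f}")
if disparity < 0.8:
    print("Potential adverse impact; deeper review needed.")
```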
Wilson also set his sights on the educational pipeline, where “we’re great at training engineers but not great at training engineers who can recognize and speak the language of ethics.” Moreover, our institutions are not designed to build bridges to disciplines that do speak those languages.
Professor Eliassi-Rad added that to build more trustworthy systems, we need to start young, training students at the earliest possible levels on how these algorithms work, how to deploy them, and how they square with our values.
“We need a multi-pronged solution,” she said. “Education is a piece of it. The law is a piece of it… We need professional norms in AI. We need to police ourselves.”
At its core, EAI believes in applying AI in ways that center on the human experience. That’s what it means to be “experiential.” It’s fitting that the last panel of the day would flip that notion on its head, using the term “experiential” to advocate for a more interdisciplinary approach to AI in all its applications, not just academic ones.
That means forming bridges between industry and academia, between research and practice, and between theory and application. It means designing systems where practitioners can see how their algorithms are used, where professionals can see how their tools are designed, and where citizens can scrutinize the systems that govern their environments. The cross-collaboration, transparency, and openness afforded by this approach are perhaps the only way to build trust in systems that, whether we like it or not, are here to stay.
To learn more about the Institute for Experiential AI, visit our website or contact us to see how we can help you reach your AI goals. You can also watch the replay of all our Discover Experiential AI festivities.