At the Institute for Experiential AI’s Leading With AI, Responsibly conference, the industry pioneer made the case that businesses should focus on AI’s broad societal impact as they develop products.
Peter Norvig literally wrote the book on artificial intelligence. The California native is the co-author of one of the most popular textbooks on the subject, “Artificial Intelligence: A Modern Approach,” which has been used in more than 1,500 universities in 135 countries.
Now, as companies in every industry rush to adopt the technology, Norvig is calling for organizations to take a human-centered approach to AI, arguing that it is better for companies’ bottom lines as well as for society more broadly.
Speaking to a crowd of business executives and AI experts at our Leading With AI, Responsibly conference, Norvig laid out a compelling case for building AI tools in a more holistic way, one that accounts for the technology’s limitations.
Norvig, a distinguished education fellow at Stanford’s Institute for Human-Centered Artificial Intelligence and a researcher at Google, incorporated ethical considerations into a presentation that was mainly pragmatic: Organizations that focus too narrowly on metrics like model accuracy may lose sight of the ultimate goal of building solutions that actually deliver value.
“Why do I care about human-centered AI?” Norvig asked. “Because it addresses the real goals of helping people.”
A better framework
Norvig says AI is human-centered if it:
Incorporates early involvement by multidisciplinary teams
Considers the impacts, benefits, and harms to all people involved
Gives appropriate control to humans
Augments rather than replaces humans
Is transparent, accountable, secure, privacy-preserving, and fair
While traditional approaches to AI may only consider users, human-centered AI considers users, outside stakeholders, and society. To illustrate the difference, Norvig described the process for developing self-driving car technology. Traditional approaches to AI might only take passengers into account. Human-centered AI would consider passengers, pedestrians, and other drivers, as well as the technology’s impact on traffic patterns, unemployment, and urban sprawl.
Norvig’s talk included a number of real-world case studies on what can go wrong when companies ignore the principles of human-centered AI. One was the COMPAS algorithm, which aimed to help judges predict recidivism when deciding whether a person should receive parole. ProPublica analyzed the system’s performance data and determined that Black defendants were twice as likely as white defendants to be labeled higher risk but not actually re-offend.
The group that built the system was focused on accuracy, Norvig said, rather than the well-being of everyone involved. As a result, they overlooked the harm of false positives. One reason for this, Norvig believes, is that the group failed to involve multidisciplinary teams early on.
“Critics said [the group] did all this work over years, and maybe their underlying assumptions were wrong,” Norvig said. “They could have avoided that by having enough stakeholders involved in the beginning to say there are other ways to look at this.”
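The gap between accuracy and harm is easy to see in miniature. The sketch below uses entirely made-up numbers (not the real COMPAS data) to show how two groups can be scored with identical overall accuracy while one group absorbs far more false positives, i.e., people labeled high-risk who never re-offend:

```python
# Illustrative only: hand-built synthetic records, not real COMPAS data.
# Each record pairs a model's prediction with the actual outcome.

def accuracy(records):
    """Fraction of records where the prediction matched the outcome."""
    correct = [r for r in records if r["predicted_high_risk"] == r["reoffended"]]
    return len(correct) / len(records)

def false_positive_rate(records):
    """Among people who did NOT re-offend, how many were labeled high-risk?"""
    negatives = [r for r in records if not r["reoffended"]]
    false_pos = [r for r in negatives if r["predicted_high_risk"]]
    return len(false_pos) / len(negatives)

# Two hypothetical groups, each scored by the same model.
# Group A's errors are mostly false positives; Group B's are mostly false negatives.
group_a = (
    [{"predicted_high_risk": True,  "reoffended": True}]  * 40
    + [{"predicted_high_risk": False, "reoffended": False}] * 40
    + [{"predicted_high_risk": True,  "reoffended": False}] * 16  # false positives
    + [{"predicted_high_risk": False, "reoffended": True}]  * 4   # false negatives
)
group_b = (
    [{"predicted_high_risk": True,  "reoffended": True}]  * 40
    + [{"predicted_high_risk": False, "reoffended": False}] * 40
    + [{"predicted_high_risk": True,  "reoffended": False}] * 4   # false positives
    + [{"predicted_high_risk": False, "reoffended": True}]  * 16  # false negatives
)

print(accuracy(group_a), accuracy(group_b))   # both 0.80: accuracy looks "fair"
print(false_positive_rate(group_a))           # 16/56, roughly 0.29
print(false_positive_rate(group_b))           # 4/44, roughly 0.09
```

A team optimizing only the first two printed numbers would see no problem; the disparity lives entirely in a metric they never looked at.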
The benefits of human-centered AI
Norvig says multidisciplinary teams help organizations identify more use cases, gaps in their data, and vulnerabilities. They also help the system generalize and provide better alignment with technical, business, and social goals.
“I think about it like zeroing in from multiple directions,” Norvig explained. “If there isn’t one metric that will give you the right thing, are there four or five different ones that I can combine?”
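One simple way to "zero in from multiple directions" is to score candidates against several metrics at once and combine them. The metric names, weights, and model scores below are hypothetical, chosen only to illustrate the idea:

```python
# A minimal sketch of combining several evaluation metrics into one score.
# All metric names, weights, and values here are illustrative assumptions.

def combined_score(metrics, weights):
    """Weighted average of normalized metrics (each assumed to be in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total_weight

# Hypothetical weighting reflecting technical, business, and social goals.
weights = {"accuracy": 0.4, "fairness": 0.3, "robustness": 0.2, "interpretability": 0.1}

model_a = {"accuracy": 0.95, "fairness": 0.60, "robustness": 0.70, "interpretability": 0.50}
model_b = {"accuracy": 0.90, "fairness": 0.85, "robustness": 0.80, "interpretability": 0.75}

# Model A wins on accuracy alone, but Model B wins on the combined view.
print(combined_score(model_a, weights))  # 0.75
print(combined_score(model_b, weights))  # 0.85
```

The weights themselves are exactly the kind of judgment call that benefits from multidisciplinary input: no single team can say how much fairness should trade off against accuracy.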
Norvig also argued human-centered AI helps build trust.
“When you take a broader approach to making these solutions, they’ll be better accepted,” Norvig said. “If you just focus on the user, you’re apt to run into resentment from other people saying, ‘Why do they have that? What about me?’”
For researchers, Norvig added another incentive for working on human-centered AI: The area is less developed, so it’s easier to make progress.
Shifting views, evolving technologies
The presentation was one of several from experts at the conference who have worked in the industry long enough to see through the periodic hype cycles. It aligned strongly with the mission of the Institute for Experiential AI, which believes the best way to solve problems, both scientifically and in practice, is through a highly applied, human-centric approach with a human-in-the-loop throughout the feedback process.
Norvig’s talk was the culmination of an evolution in his thinking about AI, which he said started with focusing on algorithms before moving to data, and now to objectives.
“I’m embarrassed to say in 1995, we said, ‘Your professor or your boss is going to tell you what to maximize, and then we’re going to tell you how to do it,’” Norvig explained. “That was the wrong approach. Really we have to say, ‘Figuring out what to do is the hard part.’”
Interested in learning more about the conference? Read a complete recap of the day’s events, get lessons learned, and stay tuned for more coverage from the Institute for Experiential AI’s flagship event!