
The Business of Responsible AI: Leaders from Google, Intuit, Harvard, Fidelity and more Share Their Experience

October 25, 2023

Most executives agree AI should be used ethically and responsibly. Saying so is easy, and it costs nothing. More challenging is implementing a Responsible AI (RAI) framework that not only supports the product and innovation cycle but advances it, too.

At Leading With AI, Responsibly, our annual AI business conference, AI experts and business leaders came together to demonstrate how, through collaborations between industry and academia, RAI provides the model for AI done right.

With representatives from a diverse cast of companies, universities, and institutions, including Google, Fidelity, Stanford, McDonald’s, Intuit, T-Mobile, Verizon, Harvard, and IEEE, the event showcased the best of the bridge-spanning, cross-disciplinary ethos that has defined the Institute for Experiential AI since its founding two years ago.

The Relevant Data

The timing couldn’t be better. With new generative AI models hitting the market, rising anxiety about impacts on the job market, and the growing impression of a “hype bubble,” a sober assessment of the ethical challenges behind a technology so often described as “transformational” is overdue. It stands to reason that such a challenge cannot be fully addressed from within the walls of academia alone, nor under the quarter-to-quarter pressures of private industry. Both are needed.

“In order to advance AI research, we need to focus on advancing some real-world challenges,” said Usama Fayyad, executive director of the Institute for Experiential AI, in his opening remarks. “The choice we made was to go after those challenges by partnering with the organizations that have the relevant data.”

Such partnerships are a critical piece of the overall research vision for the Institute and its Responsible AI framework. As Fayyad explained, many of the most interesting developments in AI have taken place in companies rather than academia, and that’s precisely because companies are the ones with the relevant data (and the relevant problems).

On the exploratory or theoretical side of AI, academia plays a critical role in supplying industry partners with the guidance and expertise needed to steer clear of immense ethical pitfalls. Cansu Canca, director of the Responsible AI practice at the Institute, put it thus:

“What we have been doing in the Institute is to bridge the gap [between industry and academia]. We have companies that come to us with their problems, and we have the technical knowledge to find solutions. We work with industry to analyze their AI systems and to put in place organizational structures where they have guidelines to ethically analyze their models and systems.”

Human-Centered

This collaborative sentiment was echoed by some of the most exciting names in AI, including Peter Norvig, Google researcher and fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

“Why do I care about human-centered approaches to AI?” Norvig asked in a keynote titled What is Human-Centered AI? “It addresses the real goals of wanting to help people. A focus that’s too narrow on machine learning can maximize accuracy, but it doesn’t measure success very well.”

Similarly, Ashok Srivastava, senior vice president and chief data officer at Intuit, explained how a “human in the loop” approach is core to the way his company puts critical technologies to use.

“We’re taking AI and allowing that expertise to come to our consumers and small businesses, but also with human expertise,” Srivastava said. “We employ tens of thousands of human experts who are augmented by AI. It’s not a replacement strategy. The idea is to use AI to allow humans to do what they do best.”

At any conference on AI, you’ll hear comments (and concerns) about the potential for AI to “replace” humans. While those fears are not unfounded, it may be reassuring to hear from business leaders who are able to frame the massive step change that is AI in a human context.

“Technology has the potential to do great good or great bad, but at the end of that technology are human beings,” said Michelle Gansle, vice president of global insights and analytics at McDonald’s. Sharing old interview clips from the likes of Steve Jobs and David Bowie, Gansle argued that the goal of technologies like AI should be to free up people to do more impactful and meaningful work.

What About the Planet?

Whether that promise has come to pass since the early days of AI is up for debate. Certainly, there were participants at the conference in Boston who would advance a broader, even ecological vision of AI, one that balances the interests not only of humans but of the planet, too.

“How will machines know what we value if we don’t know ourselves?” asked John Havens, director of emerging technologies and strategic development at the IEEE Standards Association, pushing back against the notion that ethics is merely what slows the work down. “We have to prioritize ecological flourishing and human wellbeing first. It’s irresponsible to build a technology and then say, ‘What about the planet?’ Because you end up missing so many things.”

In his closing remarks, Usama Fayyad pointed to the recent fixation on the size of large language models as an example of rushing headlong into a technology on the assumption that bigger is better, an assumption that mostly drives up costs and creates dependencies. Core to a more sustainable vision for AI is the role of the university, which has the room to breathe and to explore the art, science, and future of the field.

“As we all rush into this world of AI, I see a lot of rushing without thinking, and it worries me,” Fayyad said. “But there is a way to do it systematically and correctly.”


Find out how we can solve AI problems, together.