Key Lessons Learned from Leading with AI, Responsibly
It's easy to say AI should be used responsibly, but how should organizations actually accomplish that? Where does responsibility ultimately lie? Does accountability imply legal oversight? How can a Responsible AI framework exist alongside innovation?
This month, the Institute for Experiential AI hosted Leading With AI, Responsibly—a workshop, business conference, and career fair dedicated to helping the world make better, more responsible use of AI. Representatives from Google, Fidelity, Stanford, McDonald’s, Intuit, T-Mobile, Harvard, and elsewhere offered unique perspectives on the importance of Responsible AI.
“Most of the interesting developments in AI have taken place in companies rather than academia. That's because the companies have the data, they have the problems, and they have the motivation to go after them. We think it is up to academia to reach out and figure out ways to work with businesses, with companies, and organizations to identify real-world challenges and use them as the catalyst for advancing our knowledge.”
- USAMA FAYYAD, EXECUTIVE DIRECTOR OF THE INSTITUTE FOR EXPERIENTIAL AI
“With human-centered AI we want AI that involves early engagement in multidisciplinary teams. Too narrow a focus on machine learning can maximize accuracy, but accuracy alone doesn’t measure success very well. We want something that augments rather than replaces humans. And that changes the way that you define the problem, because you have to look at the effect on society. You have to take a broader approach to making a solution. Human-centered AI helps build trust and consensus with users. Even if you have a system that's performing great, if the users don't trust it, they're not going to use it.”
- PETER NORVIG, RESEARCHER AT GOOGLE AND EDUCATION FELLOW AT STANFORD HAI
“Gen AI feels like it emerged overnight, not because it did emerge overnight, but because it got mass-consumerized overnight. I talk to bankers all the time, and one banker recently commented that they had three companies in active deals, and all three transactions were put on hold because the acquirers wanted to figure out how AI was going to disrupt the businesses they were acquiring. There is definitely a real fear. And I think we should take it seriously, but with the amount of data and the algorithms available, it can be transformative.”
- RUDINA SESERI, FOUNDER AND MANAGING PARTNER AT GLASSWING VENTURES
“We shouldn't lose sight that this is a transformative period in human history. This isn’t just about technology, it’s also about humans, and how humans and technology evolve together. Having a mission that's larger than one person or one idea is very critical when you're making major business decisions, because it aligns people to a target. The mission matters, and when you're talking about artificial intelligence the mission really matters.”
- ASHOK SRIVASTAVA, SENIOR VP AND CHIEF DATA OFFICER AT INTUIT
“Many years back, computers only understood ones and zeros. At that time, there were very few experts who could actually talk to computers. Then came the generation of high-level languages like C, Python, .NET. In the last 20 years, you saw a transformational change and now we have an app for everything. With Gen AI, English is a new programming language, and this technology is in the hands of the masses. Everyone has it, everyone is using it, so I see exponential growth in this space with Gen AI as a foundation.”
- MANISH WORLIKAR, HEAD OF THE CENTER OF EXCELLENCE (COE) FOR ARTIFICIAL INTELLIGENCE AND ADVANCED ANALYTICS AT FIDELITY INSTITUTIONAL
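Worlikar's point that "English is a new programming language" is easy to see in a few lines of code. Below is a minimal sketch, assuming the OpenAI Python SDK (any chat-style LLM API would work); the model name, prompt, and task are placeholders for illustration, not anything Fidelity uses.

```python
# A toy illustration of "English as a programming language": a one-sentence
# instruction does work that once required hand-written parsing code.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[{
        "role": "user",
        "content": (
            "Extract the payee and amount from: 'Pay Acme Corp $1,250 by Friday.' "
            "Reply as JSON with keys 'payee' and 'amount'."
        ),
    }],
)
print(response.choices[0].message.content)  # e.g. {"payee": "Acme Corp", "amount": 1250}
```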
“When AI started blowing up, we realized we needed to come together as a system and educate people from the CEO down to the crew members for two reasons. One is to educate them on what is and isn't AI and on how to use it responsibly. But also to reinforce the fact that AI's been around a long time and we're doing a bunch of stuff in AI already. In global payments we use machine learning to help us make 50 million decisions a month on whether or not a payment is fraudulent. We use Neural Radiance Fields to help us create 3D images in advertising. And we use synthetic data to help us with testing and learning.”
- MICHELLE GANSLE, VP OF GLOBAL INSIGHTS AND ANALYTICS AT MCDONALD'S
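Gansle's fraud example follows a standard supervised-learning pattern: train a classifier on labeled payments, then score each new payment and hold it if the fraud probability is high. The sketch below illustrates that pattern with scikit-learn on synthetic data; the features, labels, and threshold are invented for illustration and bear no relation to McDonald's actual system.

```python
# Minimal sketch of an ML fraud decision on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented features: normalized amount, transaction velocity, geo-mismatch score
X = rng.normal(size=(1000, 3))
# Toy labels: fraud correlates with large values of all three features
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

new_payment = np.array([[2.1, 1.8, 0.9]])            # one incoming payment
fraud_prob = model.predict_proba(new_payment)[0, 1]  # estimated P(fraud)
decision = "hold" if fraud_prob > 0.5 else "approve"
print(f"fraud probability {fraud_prob:.2f} -> {decision}")
```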
“How will machines know what we value if we don’t know ourselves? I want to remove the stigma that ethics is the thing slowing us down. The question is, what do we want with our humanity? Is it to make sure our kids grow and flourish, whoever they are around the world, on a planet that can sustain us for generations? Then let’s remove the constant barrage of GDP-centric pressure that has led us to the anthropocentric place where we are. I sometimes feel embarrassed bringing nature into these conversations, but we need nature to live.”
- JOHN HAVENS, DIRECTOR OF EMERGING TECHNOLOGIES & STRATEGIC DEVELOPMENT AT IEEE STANDARDS ASSOCIATION
“Explainability requires us to take complex mathematical models and translate them for humans, but the humans we have to translate them to are not necessarily data scientists. They're not engineers. They're not technical. Some are from the legal world. Some are customers who want to know why your company did a certain thing. What large language models have done is create an obfuscation of explainability that is very concerning. How we think about it and how we address it will continue to be a challenge.”
- RONKE EKWENSI, VICE PRESIDENT, CHIEF DATA OFFICER AT T-MOBILE
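Ekwensi's translation problem is easiest to see with the simplest possible model. In the sketch below, a linear model's decision is broken into per-feature contributions (coefficient times feature value) and reported as one plain-language reason; nonlinear models need attribution methods such as SHAP or LIME. The feature names and data here are invented for illustration.

```python
# Sketch of translating a model decision for a non-technical audience:
# for a linear model, each feature contributes coefficient * value to the
# log-odds, so the largest contribution is the main "reason" for the decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["late_payments", "credit_utilization", "account_age_years"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.2, 0.8, -0.6]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([1.5, 0.9, -0.4])
contributions = model.coef_[0] * applicant   # per-feature pull on the decision
top = int(np.argmax(np.abs(contributions)))  # strongest single driver
print(f"This decision was driven most by '{feature_names[top]}' "
      f"(contribution to log-odds: {contributions[top]:+.2f})")
```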
“We think it is important to identify the essential elements of responsible AI frameworks and workflows so that we can define the standards of responsible AI practice. We don’t think there will be one framework to rule them all. There will always be a plurality in terms of how we embed responsible AI and ensure that it’s implemented as an integral part of the innovation process, but there should be a minimum standardization for any RAI framework that organizations use.”
- CANSU CANCA, DIRECTOR OF RESPONSIBLE AI PRACTICE AT THE INSTITUTE FOR EXPERIENTIAL AI
“During the workshop, there were many points of view that expanded my understanding of responsible AI, things other people were emphasizing that I hadn’t thought as much about, and other attendees said the same. It confirmed that when you put smart people from different backgrounds together in a room, good things happen, and the positive feedback we received after the workshop bore that out.”
- RICARDO BAEZA-YATES, DIRECTOR OF RESEARCH AT THE INSTITUTE FOR EXPERIENTIAL AI
Contact us to learn more about our research and partnerships or to get involved.