Thoughts on How McDonald’s and Fidelity Are Thinking About Generative AI

In this section of Institute for Experiential AI Executive Director Usama Fayyad’s takeaways from conversations with executives and leaders at VentureBeat Transform, Usama shares his thoughts on two sessions in which executives from two well-known companies unveiled their thinking about the transformative potential of generative AI.

 

How McDonald’s Is Leveraging Generative AI

In this session, speakers Joanna Lepore, head of global foresight at McDonald’s, and Zach Richard, senior director of data science, presented a practical, effective set of approaches for applying the technology to the right problems and contexts. Lepore also spoke about the importance of training and education. It was refreshing to see such an established brand take an early-adopter stance, evaluating and using the technology in marketing, communications, and market understanding in their highly competitive space. I was also intrigued that Lepore’s area is called foresight rather than research or advanced technology; I like the term because it implies both technology and strategy insights.

 

Leveraging Generative AI Responsibly at Fidelity

The fireside chat with Sarah Hoffman, Vice President of AI and Machine Learning Research at Fidelity Investments, considered how to leverage generative AI responsibly for productivity. The discussion offered an informative and notable look at using the technology in a highly regulated industry, and Hoffman gave a practical, clear account of how enterprises should be thinking about it. It was a rare, balanced, and pragmatic view of generative AI and its role in business.

The conversation was especially interesting because it overlaps with much of our work at the Institute. We have been partnering with financial services, fintech, and insurance companies to implement AI ethically through our Responsible AI (RAI) practice. Our process includes conducting a technical audit to assess model bias, security, accuracy, use of data, and more. From there, we help partners craft an ethics strategy for new AI projects and establish a model for AI governance. We have also assembled an AI Ethics Advisory Board of world-class experts in AI ethics who serve as an independent advisory board for companies. In addition, we address the AI talent gap by facilitating student co-ops, providing expert consulting, and offering training to help partners upskill their employees. The goal is to help our partners better understand the risks of AI and capitalize on its opportunities responsibly. Watch our AI for Finance Leaders on-demand webinar for more expert insights.

 

Get more of my takeaways from the event on topics like using generative AI for customer service, how the C-suite views generative AI, and the long, complex road toward trusted AI.
