How the C Suite really views generative AI: Insights from VentureBeat Transform

In this installment of his takeaways from VentureBeat Transform, Institute for Experiential AI Executive Director Usama Fayyad shares his thoughts on the second of three roundtable discussions he participated in. Attendees discussed how tech teams are talking to their boards and C-suites about leveraging generative AI safely and in alignment with the company's overall goals and objectives.


Roundtable: How the C-suite views generative AI

This roundtable discussion brought together a very engaged group of attendees. The main theme was the importance of explaining and demystifying the technology from an executive and board perspective. I shared my experiences discussing the technology with boards of directors and executive teams at large enterprises (as well as heads of funds and limited partners of large private equity firms) in terms that made sense to those audiences.

During the digital transformation, everyone saw some very established companies—think Kodak and JCPenney—go out of business, even though they'd been around for more than a century, because they didn't embrace that transformation fast enough or correctly. There's a similar fear with AI, and a lot of urgency. Shareholders might ask whether boards are "missing the boat" because competitors adopted something with the potential to replace their business. This FOMO (fear of missing out) is a genuine concern for boards that want to ensure such a missed opportunity does not happen on their watch.

The discussion brought out great points on governance, regulation, and identifying and tracking risks, and finished with a discussion of inflated expectations among board members and executives. Those unrealistic expectations are the result of the hype around generative AI's capabilities and competitive threats. LLMs can't reason. They do not understand consequences. They can make silly mistakes. Educating boards on the limitations of this technology is critical. LLMs are often called "stochastic parrots" because they have no understanding of what they say, they can't tell you where their answers come from, and they will even propagate misinformation if it circulates with enough frequency and buzz.

It is healthy and crucial for boards to pay close attention to AI. Historically, some boards were complacent or even asleep at the wheel, allowing executive teams to stick to traditional business practices without acknowledging technology advances. Challenging this inertia is very healthy, since left unchecked it can lead to a sudden and rude awakening, leaving everyone wondering what happened to the business. I discuss how businesses can avoid a reckoning like this in our on-demand webinar, Generative AI: How to Keep AI Human-Centric and Productive in the ChatGPT Era.

Dive deeper into my VentureBeat Transform insights with my ideas around using generative AI for customer service and the long, complex road to trusted AI solutions.
