Executives Eschew Generative AI Hype to Explore Real Implementation Issues

At VentureBeat Transform, a leading San Francisco event on applied AI for enterprise business and technology decision-makers, Institute for Experiential AI Executive Director Usama Fayyad recently joined Ernst & Young Global CIO Jeff Wong for a well-attended fireside chat exploring the complexities of generative AI today.

While talking with executives and leaders, Usama noticed a natural divide between two camps: one fervently touting the great potential of large language models (LLMs), and another, larger group asking refreshingly critical questions, including:

  • How and where does AI fit in my business?

  • Does AI really accelerate work, or does it slow it down?

  • What are the risks?

  • How do I deal with the “hallucinations” of generative AI tools such as ChatGPT?

Below, and in ensuing posts, Usama offers some key takeaways from the conference.


AI Still Needs a Human in the Loop

Is generative AI leapfrogging classical AI for customer interactions and experience? My longtime friend Ashok Srivastava, Senior Vice President and Chief Data Officer at Intuit, discussed the opportunity to integrate generative AI into large consumer products like TurboTax and Credit Karma, and into small business products like QuickBooks and Mailchimp. The work he described was impressive: Intuit’s team thought through the customer experience and tried to address real pain points. I later asked Ashok how much of what he described was generative AI versus classical, or predictive, AI. He said that predictive AI still plays a central role, but for interfaces that involve natural language processing (NLP), LLMs have been a very useful tool. He certainly believes this technology is accelerating human work, but he acknowledged that keeping a human in the loop to apply judgment and catch issues and errors is critical. Human-in-the-loop and human-centric AI are also key themes we advocate for at the Institute for Experiential AI. Hear more from Ashok at our October AI business conference.
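
To make the human-in-the-loop point concrete, here is a minimal, hypothetical Python sketch of a review gate. The names, the confidence heuristic, and the 0.8 threshold are all illustrative assumptions, not Intuit’s implementation; the idea is simply that low-confidence model drafts get routed to a person before they reach the customer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DraftAnswer:
    """An LLM draft plus a heuristic confidence score in [0, 1]."""
    text: str
    confidence: float

def answer_with_review(
    generate: Callable[[str], DraftAnswer],
    human_review: Callable[[str, DraftAnswer], str],
    question: str,
    threshold: float = 0.8,  # assumed cutoff; tune per product and risk level
) -> str:
    """Return the model's draft only when its confidence clears the bar;
    otherwise escalate to a human who applies judgment and fixes errors."""
    draft = generate(question)
    if draft.confidence >= threshold:
        return draft.text
    return human_review(question, draft)

if __name__ == "__main__":
    # Stand-in callables: a real system would wrap an LLM API call and a
    # reviewer queue here.
    def mock_generate(q: str) -> DraftAnswer:
        return DraftAnswer(text=f"Draft answer to: {q}", confidence=0.6)

    def mock_review(q: str, d: DraftAnswer) -> str:
        return d.text + " [verified by a human reviewer]"

    print(answer_with_review(mock_generate, mock_review, "Is this deduction allowed?"))
```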


Natural Language User Interface (NLUI) has its work cut out for it

Several tech vendors, including Intel, seem to think NLUI is the future of sorting through business intelligence and finance data. The problem has been around for a long time, though, and so far I haven’t seen major advances in this arena. It’s an interesting direction, but I’m still waiting for a real breakthrough, especially in analytics.


The industry needs to make room for Responsible AI 

A panel with representatives from Google and the Association for Computing Machinery (ACM) discussed efforts to navigate the many risks of generative AI. It’s hard to make serious headway on this topic because it’s not yet well understood, but I thought Jen Carter from Google was very sincere in trying to answer the tricky questions. The ACM is trying to take a proactive role in creating a framework for responsible AI, work that is being aided by our Director of Research Ricardo Baeza-Yates, who serves on ACM’s ethics committee and has been contributing to the new framework’s development. Our institute has also developed a proven framework for deploying AI responsibly, and we are applying it with many partner organizations.


Organizations are looking for practical AI advice, not vague promises 

Four panelists shared their experiences and lessons learned from using generative AI in their organizations. The most outspoken was DataRobot’s new CEO, Debanjan Saha; many of his messages centered on the importance of predictive AI and on how his company is approaching and benefiting from generative AI. The others were two representatives of Baptist Health, speaking for the user community, and Lake Dai of Carnegie Mellon University, who addressed issues in applied AI and the use of the technology in financial services. The main theme was raising awareness of the technology’s importance for accelerating work and operations. The view from Baptist Health was a welcome injection of conservatism and awareness of the issues that arise in a regulated environment. The leaders did a good job explaining how they think about using predictive AI in that context, but I would have liked a more detailed discussion of actual issues around real implementations rather than just the technology’s potential and possibilities.

In my healthcare experience, both from my own startup and from working with many of these organizations, I see a lot of room to leverage AI as a flexible way to digitize and improve patient, payer, and provider experiences.


Read more of Usama’s thoughts on AI, spurred by the event:

Thoughts on How McDonald’s and Fidelity Are Thinking About Generative AI

Experts Discuss Problems, Potential of Large Language Models (LLMs)

Will Generative AI Upend Business? Executives Debate at VentureBeat Transform Conference

How the C Suite really views generative AI: Insights from VentureBeat Transform

VB Takeaways: The Truth About Generative AI For Customer Service
