At VentureBeat Transform, a leading event on applied AI for enterprise business and technology decision makers held in San Francisco, Institute for Experiential AI Executive Director Usama Fayyad recently joined Ernst & Young Global CIO Jeff Wong for a well-attended fireside chat exploring the complexities of generative AI today.
While talking with executives and leaders, Usama noticed a natural divide between two camps: one fervently touting the great potential of large language models (LLMs), and another, larger group asking refreshingly critical questions, including:
How and where does AI fit in my business?
Does AI really accelerate work, or does it slow it down?
What are the risks?
How do I deal with the “hallucinations” of generative AI and ChatGPT?
Below, and in ensuing posts, Usama offers some key takeaways from the conference.
AI still needs a human in the loop
Is generative AI leapfrogging classical AI for customer interactions and experience? My longtime friend Ashok Srivastava, Senior Vice President and Chief Data Officer at Intuit, discussed the opportunity to integrate generative AI into large consumer products like TurboTax and Credit Karma, and into small business products like QuickBooks and Mailchimp. The work he described was impressive, as Intuit’s team thought through the customer experience and tried to address real pain points. I later asked Ashok how much of what he described was generative AI versus classical, or predictive, AI. He said that predictive AI still plays a central role, but for interfaces that involve natural language processing (NLP), LLMs have been a very useful tool. He certainly believes the technology is accelerating human work, but he acknowledged that keeping a human in the AI loop to apply judgment and catch issues and errors is critical. Human-in-the-loop and human-centric AI are also key themes we advocate for at the Institute for Experiential AI. Hear more from Ashok at our October AI business conference.
Natural Language User Interface (NLUI) has its work cut out
Several tech vendors, including Intel, seem to think NLUI is the future of sorting through business intelligence and finance data. The problem has been around for a long time, though, and so far I haven’t seen major advances in this arena. It’s an interesting direction, but I’m still waiting for a real breakthrough, especially in analytics.
The industry needs to make room for responsible AI
A panel with representatives from Google and the Association for Computing Machinery (ACM) discussed efforts to navigate the many risks of generative AI. It’s hard to make serious headway on this topic because it’s not very well understood, but I thought Jen Carter from Google was very sincere in trying to answer the tricky questions. The ACM is trying to take a proactive role in creating a framework for responsible AI, work that is being aided by our Director of Research Ricardo Baeza-Yates, who serves on ACM’s ethics committee and has been contributing to the new framework’s evolution and growth. Our institute has also developed a proven framework for deploying AI responsibly, and we are applying it with many partner organizations.
Organizations are looking for practical AI advice, not vague promises
Leaders from four organizations shared their experiences and lessons from using generative AI in their businesses. The most outspoken was DataRobot’s new CEO, Debanjan Saha; many of his messages centered on the importance of predictive AI and on how his company is approaching and benefiting from generative AI. The other panelists represented Baptist Health (two panelists from the user community) and Carnegie Mellon University (Lake Dai, speaking to issues in applied AI and the use of the technology in financial services). The main theme was raising awareness of the technology’s importance for accelerating work and operations. The view from Baptist Health was a welcome injection of conservatism and awareness of the issues that come with operating in a regulated environment. The leaders did a good job explaining how they think about using predictive AI in that context, but I would have liked a more detailed discussion of actual issues around real implementations rather than just the potential and possibilities of the technology.
In my own healthcare experience, both from my startup and from working with many of these organizations, I see a lot of room to leverage AI as a flexible way to digitize and improve the patient, payer, and provider experience.
Get more of Usama’s thoughts on AI spurred by the event:
Thoughts on How McDonald’s and Fidelity Are Thinking About Generative AI
Experts Discuss Problems, Potential of Large Language Models (LLMs)
Will Generative AI Upend Business? Executives Debate at VentureBeat Transform Conference
How the C Suite really views generative AI: Insights from VentureBeat Transform
VB Takeaways: The Truth About Generative AI For Customer Service