VB Takeaways: The Truth About Generative AI For Customer Service

In this installment of his takeaways from conversations with executives and leaders at VentureBeat Transform, Institute for Experiential AI Executive Director Usama Fayyad shares his thoughts on the last of three roundtable discussions, in which attendees discussed the potential and challenges of using generative AI for customer service.

Generative AI for customer service

This roundtable explored ways to leverage LLMs for customer service, a topic that gets a lot of attention. Our discussion attempted to sort out some common misunderstandings in this area. For instance, some attendees assumed this is an easier, more well-defined application area. It may appear that way on the surface, but it is considerably more complex in practice. Others assume the area is “safe” because the data from a call center belongs to the company; in reality, there may be issues with personally identifiable information and other privacy considerations. Ownership is also often unclear, because many call centers are outsourced and accessing all of the data is a challenge. We explore leveraging AI to improve customer service in the insurance industry in our on-demand webinar, which you can watch here.

Some startups claim they have a new generation of more robust and reliable chatbots that leverage generative AI. However, in my many years of exposure to this field, I have yet to see anything of the sort. I remember serving on the advisory board of Abe.AI, one of the many startups in the space trying to build chatbots with robust conversational AI and the ability to seamlessly hand off escalations to human customer service representatives and managers (what we call tier 2 and tier 3). This handoff was always challenging. I am eager to see whether generative AI will change this, but I remain skeptical until I see real evidence of large-scale deployment. At least one startup in the room claimed it is there already. In the back of my mind, I worry that the opacity and black-box nature of LLMs will make it even more difficult to understand the outcomes and recommendations these models produce.

I am eager to continue my conversations at future events, such as our annual business leaders conference, which will teach attendees how to lead with AI responsibly.
