Experts Discuss Problems, Potential of Large Language Models (LLMs)

In this installment of his takeaways from conversations with executives and leaders at VentureBeat Transform, Institute for Experiential AI Executive Director Usama Fayyad reflects on a couple of particularly insightful discussions around large language models (LLMs).


An Interesting Approach to Building LLMs

I had a chance to discuss some deeper topics with Hassan Sawaf, the founder and CEO of one of the exhibitors, AIXplain. We spoke about the challenges of building LLMs. AIXplain is tackling the problem from three interesting angles: simplifying the use of the technology, applying it successfully in English and other languages (e.g., Arabic), and creating a marketplace where people can build models and applications and then recoup their investment by letting others use them. I was surprised at the size of the marketplace for such a young company: it already offers over 35,000 applications that leverage AI. We also had a deep discussion on the hard problem of making models understandable enough to earn user and corporate/customer trust.


LLMs Have a Size Problem

Kjell Carlsson from Domino Data Lab made some great points in his session about the flaws of large generative AI models. Often, “smaller is more beautiful” and practical: smaller models cost less to train and run, stay more focused, and make fewer errors. I agree with that. I also believe smaller, specialized models can be more stable, and they are definitely easier to maintain and update. Carlsson argued that the models are getting out of control because of their size, making them harder to train, maintain, and revise. I think sizing models down and focusing on narrow capabilities will be key to leveraging the LLMs of the future. You don’t need the biggest LLMs for most tasks. A model that can handle many aspects of, say, the English language and a broad range of general topics is rarely needed in focused business settings and applications.


Concluding Thoughts

LLMs were hardly the only thing experts discussed at the conference. They also discussed generative AI more broadly, including implementation issues and strategy with the technology.

I am eager to continue my conversations at future events, such as our annual business leaders conference, which will teach attendees how to lead with AI responsibly.
