In this installment of Institute for Experiential AI Executive Director Usama Fayyad’s takeaways from conversations with executives and leaders at VentureBeat Transform, Usama reflects on two particularly insightful discussions around large language models (LLMs).
An Interesting Approach to Building LLMs
I had a chance to discuss some deeper topics with Hassan Sawaf, the founder and CEO of one of the exhibitors, AIXplain. We spoke about the challenges of building LLMs. AIXplain is tackling the problem from three interesting angles: how to simplify the use of the technology, how to apply it successfully in English and other languages (e.g. Arabic), and how to create a marketplace where people can build models and applications and then offset their investment by letting others use them. I was surprised at the size of the marketplace for such a young company: It offers more than 35,000 AI-powered applications. We also had a deep discussion on the hard problem of making these models understandable enough to earn the trust of users and corporate customers.
LLMs have a size problem
Kjell Carlsson from Domino Data Lab made some great points in his session about the flaws of large generative AI models. Often, “smaller is more beautiful” and more practical: smaller models cost less to train and run, stay more focused, and make fewer errors. I agree with that. I also believe smaller, specialized models can be more stable, and they are definitely easier to maintain and update. Carlsson argued that these models are getting out of control because of their size, making them harder to train, maintain, and revise. I think sizing models down and focusing on narrow capabilities will be key to leveraging the LLMs of the future. You don’t need the biggest LLMs for most tasks. While it is impressive to have a model that can handle many aspects of, say, the English language and a wide range of general topics, that breadth is rarely needed in focused business settings and applications.
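To make the point concrete, here is a minimal sketch of what “smaller and specialized” can look like in practice. It assumes the Hugging Face transformers library and uses a distilled, single-task sentiment model purely as an illustration; the model choice and example inputs are my own, not anything discussed in the session.

```python
# Minimal sketch: for a narrow, well-defined task (here, sentiment classification),
# a small fine-tuned model is often cheaper, faster, and easier to maintain than a
# giant general-purpose LLM. The model name below is illustrative.
from transformers import pipeline

# A distilled model (tens of millions of parameters, not hundreds of billions)
# fine-tuned for exactly one job: sentiment analysis.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

feedback = [
    "The onboarding flow was confusing and slow.",
    "Support resolved my issue within minutes.",
]

for text in feedback:
    result = classifier(text)[0]
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```

A model like this can run on modest hardware and be retrained or swapped out quickly, which is exactly the maintainability advantage smaller, focused models offer over their largest general-purpose counterparts.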
Concluding Thoughts
LLMs were hardly the only topic at the conference. Experts also discussed generative AI more broadly, including implementation challenges and strategies for adopting the technology.
I am eager to continue my conversations at future events, such as our annual business leaders conference, which will teach attendees how to lead with AI responsibly.