Agentic AI: Institute for Experiential AI Position

January 28, 2025

Cutting Through the Chaos of Agentic AI

Introduction

The growing hype around agentic AI has sparked excitement, skepticism, and debate. At its core, agentic AI refers to systems that can operate autonomously and collaboratively, often with multiple specialized models working together to solve complex problems. But what does this really mean for businesses?

Recognizing the frenzy surrounding agentic AI, including the trend of companies relabeling themselves as “agentic” to capture attention, Northeastern University’s Institute for Experiential AI aims to cut through the noise. In keeping with our mission to provide practical, informed perspectives, we offer clarity and insights on agentic AI in this paper.

Our experts’ perspective on this topic is informed by research and applied work with industry partners building AI systems that solve complex business problems. Much of that work comes down to designing human-in-the-loop AI systems and building smaller models optimized for efficiency and ROI; if you count humans, our AI work has involved multiple agents all along.

"While agentic AI has captured the imagination of industry leaders, it has also brought a great deal of confusion. Amidst the hype, executives should resist the urge to be the first to deploy the technology and instead focus on the business use cases that ultimately determine the worth of any emerging technology."
— Usama Fayyad, executive director, Institute for Experiential AI

Understanding Agentic AI: What It Is and Isn’t

Agentic AI is not a new concept. We have seen waves of work on agents, multi-agent systems, and agent-based approaches for decades. In the 1990s, formal communities and conferences emerged around AI-based agents and agent-based AI. Many federal and DoD funding programs were established, and much work took place in the AI research community, producing real solutions and many uses of agents in simulation and in modeling interactions, negotiations, and game-theoretic strategies. Recently, however, two factors set off a wave of hype on a large scale:

  1. Heightened awareness of what AI can do, driven by impressive demonstrations of generative AI’s success in many fields.
  2. The ease of creating agents that perform natural language processing, speech recognition, or video and data analysis, thanks to the wide availability of tools for these tasks; building such capabilities was a difficult hurdle for earlier efforts to create flexible, adaptive agents.

This has led to the latest step in a gradual shift from general large language models (LLMs) to smaller, specialized agents that collaborate to autonomously perform specific tasks. For instance:

  • In cybersecurity, agentic AI could involve a network of specialized models, each working to find different vulnerabilities, communicate insights, and collectively address threats (a toy sketch of this pattern follows this list).
  • In software development, tools like Copilot are evolving into agent-based systems, where individual models focus on specific tasks such as UI design, framework integration, or code optimization.
  • OpenAI recently released its first agent, which it showed autonomously reserving a table and ordering groceries for delivery.
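
To make the collaboration pattern concrete, here is a toy Python sketch in the spirit of the cybersecurity example. The regex-based “specialists” and the coordinator are our own illustrative stand-ins, not a real implementation; an actual agentic system would wrap specialized models rather than pattern matchers.

```python
import re

def port_scanner(config: str) -> list[str]:
    # Specialist 1: flags lines that expose a remote-access port.
    return [f"open SSH port: {line}" for line in config.splitlines() if "port=22" in line]

def secret_scanner(config: str) -> list[str]:
    # Specialist 2: flags lines containing hardcoded credentials.
    return [f"hardcoded secret: {line}" for line in config.splitlines() if re.search(r"password\s*=", line)]

def coordinator(config: str) -> str:
    # Coordinating agent: collects what each specialist communicates
    # and produces a combined assessment.
    findings = port_scanner(config) + secret_scanner(config)
    return f"{len(findings)} issue(s) found: " + "; ".join(findings)

sample = "host=10.0.0.1 port=22\npassword = hunter2\nlog_level=info"
print(coordinator(sample))
```

The point of the pattern is that each specialist contributes a narrow finding, and a coordinating agent combines what the specialists communicate into a collective response.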
"Agentic AI may sound revolutionary, but in many respects, it's just a new term for something we have been building for years. At the Institute for Experiential AI, we've collaborated with partners to design networks of smaller, specialized models that work together to perform complex tasks in areas like gene discovery."
— Sam Scarpino, director of AI + Life Sciences, Institute for Experiential AI

So what is agentic AI, and what is not?

In Fayyad’s view, AI agents are application programs designed to perform non-deterministic tasks on their own. They can act autonomously but can also draw on help and information from, or collaborate with, other agents. Those other “agents” could be human experts or helpers. They could also be generative AI models, or LLMs designed to contribute one or more steps toward an answer or solution.

Fayyad believes one requirement of an “AI agent” is the ability to deal with ambiguity in the task description, in the context or environment, or in the expected outcomes. Another is that the agent can take in communications (help, partial solutions, hints, answers, etc.) from outside sources, most likely other agents. It should also be able to communicate outcomes, recommendations, hints, findings, or partial solutions. Finally, “agentic AI” generally requires multiple agents to be involved.

Examples of what is not agentic AI: task automation, or robotic process automation (RPA), would not be considered agentic in this view. Generally speaking, an application that executes a known process with known inputs and outputs is not agentic AI. An agent that calls existing LLMs to advance one or more steps toward an answer or solution could be considered agentic if it uses more than one such model.
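
A minimal Python sketch may help pin down these requirements. The `Agent` and `Message` types and the `call_model` stub are invented for illustration, not any particular framework’s API:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    kind: str       # e.g., "hint", "partial_solution", "finding"
    content: str

def call_model(task: str, context: str) -> str:
    # Stand-in for a generative model call; its output varies with context
    # and sampling, which is what makes the agent's task non-deterministic.
    return f"proposed step for '{task}' (context: {len(context)} chars, sample={random.random():.2f})"

@dataclass
class Agent:
    name: str
    inbox: list[Message] = field(default_factory=list)

    def receive(self, message: Message) -> None:
        # Requirement: take in help, hints, or partial solutions from
        # outside sources, likely other agents.
        self.inbox.append(message)

    def act(self, task: str) -> Message:
        # Requirement: act autonomously on an ambiguous task and communicate
        # the outcome (here, a partial solution) back to other agents.
        context = " ".join(m.content for m in self.inbox)
        return Message(self.name, "partial_solution", call_model(task, context))

planner = Agent("planner")
planner.receive(Message("retriever", "hint", "two relevant prior cases found"))
print(planner.act("draft a remediation plan").content)

# By contrast, an RPA-style script is a fixed mapping from known inputs to
# known outputs; it has neither the inbox nor the non-deterministic core.
```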

The Market Challenge: Confusion and Urgency

In our discussions about agentic AI with business leaders, confusion reigns. They want to understand if and how agentic AI fits into their business. Some dismiss it as hype; others recognize its transformative potential but are struggling to operationalize it.

"We need to be careful to set appropriate expectations. LLMs, RAG [retrieval augmented generation] and Agentive systems can do many things, but there are always opportunities for improvement."
— Ken Church, senior principal research scientist, Institute for Experiential AI

As AI systems complete increasingly complex tasks autonomously, the risk of unintended consequences will rise, as the Financial Times and the Wall Street Journal have highlighted.

"As AI agents with an ability to perform non-deterministic tasks autonomously become integrated into larger systems, the room for amplification of any error increases drastically. This is especially true if the broader system involves communication between multiple AI agents. While the human-in-the-loop approach is often referred to as a panacea, such complex multi-agent AI networks make it more difficult to determine when and how the human intervention can be guaranteed to prevent snowballing of errors and miscommunication."
— Cansu Canca, director of Responsible AI Practice, Institute for Experiential AI

Where We Stand

Agentic AI is the next step in the natural evolution of AI technology toward taking action to solve more complex problems. In fact, we believe it is a natural framework for decomposing large language models into smaller language models with a specialized, narrow focus. Training smaller, specialized models is significantly easier and cheaper than training large models.

A small language model can also be fine-tuned and re-trained as needed. Attempting the same on a large language model brings an exponential increase in complexity, beyond the sheer expense of training an LLM with a huge number of parameters: the fine-tuning must be tested and verified to ensure it does not degrade the other skills, capabilities, or languages supported by the single model. An agent-based architecture that utilizes a large number of smaller, specialized models is thus likely to be cheaper, more stable, and more effective. More broadly, fine-tuning and learning from errors and human interventions are necessary parts of building effective AI models.
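
As a rough illustration of this decomposition (all names below are invented for the example), a lightweight router can dispatch each task to an independently fine-tuned specialist instead of to one general-purpose LLM:

```python
from typing import Callable

# Each entry stands in for a small, specialized model; in a real system these
# would be fine-tuned models served behind an API, not lambdas.
SPECIALISTS: dict[str, Callable[[str], str]] = {
    "summarize": lambda text: f"[summarizer] {text[:40]}...",
    "classify":  lambda text: f"[classifier] label for: {text[:40]}",
    "extract":   lambda text: f"[extractor] entities in: {text[:40]}",
}

def route(task: str, payload: str) -> str:
    # The router replaces the single large model's implicit task handling;
    # in practice it could itself be a small model rather than a lookup table.
    if task not in SPECIALISTS:
        raise ValueError(f"no specialist registered for task '{task}'")
    return SPECIALISTS[task](payload)

print(route("summarize", "Quarterly revenue rose in every region except EMEA."))
```

Because the specialists are independent, re-training the summarizer cannot regress the classifier; each model can be swapped, fine-tuned, and verified on its own schedule, which is the stability and cost argument made above.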

For years, the Institute for Experiential AI has been building sophisticated, human-in-the-loop AI systems using the same principles that underpin the successful development of agentic AI. It’s important to note that agentic AI abilities will continue to evolve as the technology advances.

Discover how we help organizations navigate the rapidly evolving AI and data landscape and leverage cutting-edge expertise and tailored solutions to drive innovation and success.
