Northeastern’s Institute for Experiential AI Presents “Generative AI: From the Classroom to the Economy,” a Panel on Generative AI’s Impact on Work and Learning, and Announces Three New Online Courses

By: Zach Winn

Generative AI is already revolutionizing the way we work, teach, and learn. But how can companies reap the benefits of AI without falling into traps caused by its limitations? How can teachers and students use AI to augment their abilities and unleash their creativity?

Those were among the topics addressed at an event hosted by the Institute for Experiential AI (EAI) at Northeastern University on April 25. In addition to panel discussions, EAI used the occasion to announce three new online courses that will educate students, workers, and the broader public on AI.

“This is quite a moment,” David Madigan, the provost of Northeastern University, said in the event’s opening remarks. “It’s astonishing how quickly things have changed and how this has become public discourse just in the last couple of months. People are asking a lot of questions: What is AI? What does it mean? What will be its impact on my job, my life, and the world in general?”

EAI’s panelists, many of them affiliated faculty at EAI, sought to answer those questions. They explained how generative AI works, how they’re helping organizations adopt it, and how they’ve used the technology to improve their own work in the classroom.

Those efforts align with the new courses, which aim to equip learners from all backgrounds with the skills to live and work with AI.

One of those online courses will explore the legal dimensions of AI in the workplace and in the practice of law more broadly. Another will help businesses understand what it takes to get the most out of generative AI and other techniques. The third will serve as a free online guide to AI and is intended for a general audience.

“Education will be essential as we find our way in the age of AI,” Michael Bennet, EAI's director of education curriculum and business lead for responsible AI, told the audience.

The event brought together experts in education, philosophy, AI, marketing, the life sciences, and more to examine the implications of generative AI from all angles. Byron Wallace, an associate professor at Northeastern University’s Khoury College and a member of EAI's Faculty Leadership Committee, opened the talks with an explanation of how generative AI systems work. He also discussed common problems, such as their penchant for making up facts — what the industry refers to as “hallucinations.”

“These are truly remarkable technologies, but they have substantial limitations, and the important thing is that it’s not clear that scale will fix any of these things,” Wallace said.

An overarching theme of the event was that for generative AI to be used safely and effectively, organizations need to keep a human involved to compensate for the technology’s limitations.

“These systems are saying stuff that they don’t understand, which means if you’re about to start using the technology, you have to consider how a system that’s incapable of understanding what it’s saying — or reasoning about it — can function,” EAI executive director Usama Fayyad told the audience. “The answer is that it won’t function unless you have human intervention.”

Experts in the second panel outlined how generative AI will change work to be less repetitive and more editorial. They explained what universities, governments, and companies need to do to prepare for the future of work.

“It’s going to affect just about every experience we have professionally and socially,” said Matthew Goodwin, an EAI faculty affiliate and associate professor at Northeastern University’s Khoury College. “I’m excited about it, because it has great potential, but at the same time we have some serious concerns about generative AI, because it could do just as much harm as it does good.”

The discussions complemented work being done at EAI through its Responsible AI (RAI) services, which help organizations ensure they are using AI ethically. Associate professor Cansu Canca, EAI’s ethics lead, told the audience ethical safeguards needn’t come at the expense of speed.

“Responsible AI enhances technology, so we are not here to be the police or create bottlenecks for companies,” Canca said. “We want to integrate ethics efficiently into the organizational structure and innovation workflow.”

In a closing discussion, Sam Scarpino, EAI’s director of AI + Life Sciences, explained why Northeastern and EAI are well positioned to take the lead on efforts around generative AI.

“We recently sat down with the head of AI at a big pharmaceutical company, and they were interested in four things: upskilling, talent, sponsored research as a proof of concept, and evaluating opportunities with potential vendors using AI models,” Scarpino said. “Northeastern has all of those things in spades, so Northeastern and EAI are where people should be engaging with these challenges.”

Fayyad closed the proceedings by noting the remarkable pace of change, and by observing that everything discussed was evidence that Northeastern, true to its experiential roots, has embraced the changing technology, not just in the sciences but in the humanities as well. He re-emphasized that the institute is not simply a destination for businesses that need real solutions: in finding those solutions, the institute creates opportunities for co-ops, working with partners to define, implement, and deliver working solutions while grooming the next generation of AI leaders. He summed it up simply: “We’re here to help.”