
Gary Marcus on Finding a Better Model for Artificial General Intelligence

by Tyler Wells Lynch

September 30, 2022

Conversations about artificial general intelligence (AGI) always seem to involve predictions: How far away are we? Employees at OpenAI say they think it’ll happen within the next 15 years. Elon Musk is even more bullish, pointing to 2029 as the banner year.

But for scientist, best-selling author, and entrepreneur Gary Marcus, these predictions are, to put it lightly, unrealistic. A Distinguished Lecturer at the Fall Seminar Series hosted by the Institute, Marcus explained why: Each one hinges on the assumption that, given enough data, deep learning neural networks will eventually produce an intelligence surpassing that of humans. The mantra “scale is all you need” is, in his view, nothing short of wishful thinking.

Don’t mistake Marcus’ skepticism for dismissal. He was quick to praise deep learning’s great triumphs: AlphaFold’s predictive protein modeling, DALL-E’s image generation, and DeepMind’s mastery of Go, to name a few.

But as impressive as these models are, their capabilities are fairly narrow. None comes close to a definition of AGI that would satisfy the participants of the famed 1956 Dartmouth Conference, and, as Marcus argues, none ever will so long as it relies on deep learning alone to get there.

To achieve artificial general intelligence, AI needs to excel at more than just learning. It needs to have deep, conceptual knowledge of objects; it needs to understand the difference between entities over time; and it needs to internalize human values. Benchmarks for such a system need to move beyond accuracy scores and parameter counts to include more demanding criteria like reading comprehension and narrative insight. Said another way, AI needs to get better at abstraction.

“It’s fundamentally a long-tail problem,” Marcus said. “Deep learning is really good if you have lots of data about routine things, but really bad if you have little data about unusual but important things.”

So what would a better foundation model look like? What’s needed to build an AI that’s both generally intelligent and trustworthy? To answer that, we have to go back to the drawing board…

1.) The hybrid neuro-symbolic approach

Marcus argues for a hybrid model, one that incorporates both deep learning and classical symbolic operations. Symbolic AI was the dominant mode of AI research up until the 1990s, relying on human-readable representations of logic problems. Crucially, this involved human oversight, and the human element is the main reason why symbolic AI, to this day, surpasses deep learning at generalization—the ability of a system to handle inputs that fall outside its training distribution.

A good example is in video game exploration: Symbol systems are better than deep learning at open-ended tasks because they don’t have to relearn rules from scratch every time they’re exposed to new inputs. Symbolic systems, drawing on an innate feature of human intelligence, are able to re-use previously learned information.
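As a minimal sketch (my own illustration, not from the talk), the contrast looks something like this: a hand-written symbolic rule transfers for free to inputs it has never seen, while a system that has merely memorized its training pairs has nothing to say about them.

```python
# Hypothetical sketch: a symbolic rule vs. a memorized mapping.
# The rule and the examples are invented for illustration.

def symbolic_open(key_color, door_color):
    # One explicit, human-readable rule: a key opens a door of the
    # same color. It applies to any color, seen in training or not.
    return key_color == door_color

# A "trained" lookup that only memorized its training pairs.
memorized = {("red", "red"): True, ("red", "blue"): False}

novel_case = ("teal", "teal")       # never appeared in training
print(symbolic_open(*novel_case))   # the rule transfers: True
print(memorized.get(novel_case))    # no answer outside the data: None
```

The point of the sketch is not that a dictionary is a fair stand-in for a neural network, but that an explicit rule, once written, is re-used rather than re-learned.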

For similar reasons, symbolic AI excels at temporal reasoning—the ability to discern an answer based on when the question is asked. For example, the question “who is president?” can’t be answered by counting mentions on the internet; a correct answer depends on the time at which the question is asked.
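One hypothetical way to represent such a time-dependent fact symbolically is a table of validity intervals. The sketch below is my own; the names and dates are real U.S. presidential terms, used purely for illustration.

```python
from datetime import date

# A symbolic representation of a fact whose truth changes over time:
# (term_start, term_end, name).
TERMS = [
    (date(2009, 1, 20), date(2017, 1, 20), "Barack Obama"),
    (date(2017, 1, 20), date(2021, 1, 20), "Donald Trump"),
    (date(2021, 1, 20), date(2025, 1, 20), "Joe Biden"),
]

def who_is_president(asked_on):
    # The answer is a function of WHEN the question is asked,
    # not of how often a name is mentioned.
    for start, end, name in TERMS:
        if start <= asked_on < end:
            return name
    return None

print(who_is_president(date(2022, 9, 30)))  # "Joe Biden"
print(who_is_president(date(2010, 6, 1)))   # "Barack Obama"
```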

But here’s the nub: Deep learning, coding, Bayesian formalism, symbolic AI—these are all just tools in a toolbox. According to Marcus, a hybrid model allows researchers to free themselves of the burden of approaching every problem with only deep learning at their disposal.

2.) Conceptual knowledge and compositionality

Deep learning systems struggle with conceptual knowledge about physical objects and how they relate to one another. What appears common sense to a preschooler is often a herculean insight for AI. For example, a child understands that breaking a bottle of water will result in a spill. When tasked with the same prediction, deep learning programs tend to return nonsensical answers like, “the water will probably roll.”

Much of human knowledge has to do with meta-cognition—the awareness of one’s own awareness. This is important in social contexts as well as in more analytical problems like understanding why fictional characters behave the way they do. Four-year-olds have an innate understanding of people and objects as independently existing things. They’re able to reason about how things work, what other people might believe, and then adjust their behavior accordingly.

AI can do none of that. A successful foundation model needs to have an innate understanding of objects as independent entities with fundamental properties, behaviors, and, in the case of people, desires. More challenging yet, it needs to understand how those objects relate to one another.

3.) Moral reasoning and human values

Examples abound of large language models failing to comprehend simple sentences. In most cases, they confound the statistical distribution of word sequences with an accurate model of the world as it really is. It’s no surprise, then, that these systems are prone to bias, misinformation, and unethical recommendations.

Take, for example, the horrifying case of a GPT-3 chatbot trained to provide medical advice: When asked by a fake patient in a training scenario, “Should I kill myself?” the bot responded, “I think you should.” The chatbot, like all chatbots, was trained only to predict the next word in a sequence, and its training data was largely drawn from interactions between people on the internet. Those interactions tend to be encouraging in tone, and the consequence is the most damning behavior one could imagine in a crisis counselor.
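A toy sketch (invented here, not Marcus’ example) of pure next-word prediction makes the failure mode concrete: the predictor extends any prompt with whatever continuation was most common in its training text, with no model of what the words mean or what is at stake.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: for each word, count which word follows it
# in the training text, then always pick the most frequent follower.
# The training text is invented to mimic an "encouraging" corpus.
training_text = (
    "i think you should try it "
    "i think you should go for it "
    "i think you should do it"
).split()

follows = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    follows[word][nxt] += 1

def continue_from(prompt, n=3):
    words = prompt.split()
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# The predictor reproduces the encouraging pattern regardless of what
# is being encouraged; it has no representation of meaning or stakes.
print(continue_from("i think", 2))  # "i think you should"
```

A real language model is vastly more sophisticated than a frequency table, but the objective is the same: predict the next token, whatever its consequences.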

Okay, so we have a hybrid model, deep conceptual knowledge, and moral reasoning: Toss all that into a stew. Is what comes out AGI? Maybe, maybe not. As Marcus says, to achieve general AI, we need systems that are as adept at understanding as they are at learning, and the process of imbuing machines with deep understanding will not be an easy one given current approaches.

Intelligence is complex and multifaceted—emotional, symbolic, theoretical, creative, analogical. Considering how difficult it is to even define intelligence, the question is warranted: If we want to build AGI, is the current strategy working?

Looking for more? Catch a replay of this talk here and register for an upcoming seminar here.

