
Can AI Help Scientists Discover What They Don’t Know? One Researcher’s Journey from Cricket to Machine Learning

July 28, 2023

By: Tyler Wells Lynch

Science is often characterized as a process for discovering what we know, but it is just as often a process for discovering what we don’t know. Newspapers may be less inclined to run headlines about scientific ignorance, but the craft of unearthing and questioning our underlying assumptions is no less integral to discovery.

For researchers like Zulqarnain Khan, a research scientist in the AI + Health practice at the Institute for Experiential AI (EAI), mapping out the consequences of those assumptions is not just an area of interest, it’s his stock in trade. And it all began with a game of cricket.

Hawk-Eye

As an undergraduate student at the National University of Sciences and Technology in Pakistan, Zulqarnain was fascinated by ball-tracking systems—the computer vision tools used in tennis and cricket to predict the trajectory of a struck ball. He wanted to know how they worked, so for his final-year project he reverse-engineered what he calls a “five-dollar version” of the Hawk-Eye ball-tracking technology. That project marked his first foray into machine learning (ML).

So how well did it work?

“It worked like a $5 version would work,” Zulqarnain says. “It did fine on simulations and simulated data—like if you make animations of the sport. But it failed horribly on real data.”
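
What might a “five-dollar version” look like? Here is a minimal, purely hypothetical sketch (not Zulqarnain’s actual code): threshold each video frame by color, collect the ball’s center across frames, and fit a parabola to extrapolate the arc. The HSV bounds and the video filename are illustrative placeholders.

```python
# A hypothetical "five-dollar" ball tracker, for illustration only:
# find a brightly colored ball by HSV thresholding, collect its
# centers across frames, and fit a parabola to extrapolate the arc.
# The HSV bounds and video path are placeholders, not project values.
import cv2
import numpy as np

LOWER, UPPER = (20, 100, 100), (40, 255, 255)  # rough HSV range for a yellow ball

cap = cv2.VideoCapture("delivery.mp4")  # placeholder clip of one delivery
points = []  # (x, y) image coordinates of detected ball centers

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        if radius > 3:  # ignore specks of noise
            points.append((x, y))
cap.release()

if len(points) >= 3:
    pts = np.array(points)
    # Gravity makes the arc roughly parabolic in the image plane, so
    # fit y as a quadratic in x and extrapolate ahead of the ball.
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)
    predicted_y = np.polyval(coeffs, pts[-1, 0] + 50)  # 50 px downrange
```

A tracker like this does fine on clean animations, where the ball is the only bright blob in frame, and falls apart on real footage full of shadows, motion blur, and occlusions, exactly the failure Zulqarnain describes.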

That’s a common experience for people who work in machine learning. Models trained on simulated or “idealized” data often fall apart upon contact with real-world conditions. Sometimes the problem is a lack of high-quality labeled data—the kind needed to train a ball-tracking system, for example. Learning from labeled examples like these is known as supervised learning.

If you're training a model to learn the difference between, say, cats and dogs, supervised learning is helpful because the labels themselves—what’s a cat vs. what’s a dog—are not really up for debate. The system can quickly learn the relationship between inputs and outputs and begin to classify new inputs with a high degree of accuracy. But because supervised learning involves human oversight and a laborious labeling process, it’s pretty resource-intensive.
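
As a concrete, purely illustrative sketch, a supervised classifier needs nothing more than labeled examples; the two features below are synthetic stand-ins, not real measurements.

```python
# A toy supervised learner, for illustration: every training example
# comes with a human-supplied label (0 = cat, 1 = dog), and the model
# learns the input-to-output mapping. The two features are synthetic
# stand-ins (say, weight in kg and ear length in cm), not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cats = rng.normal(loc=[4.0, 6.0], scale=[1.0, 1.5], size=(200, 2))
dogs = rng.normal(loc=[20.0, 10.0], scale=[8.0, 3.0], size=(200, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 200 + [1] * 200)  # the labels a human had to provide

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

Every one of those labels had to come from a person, which is where the resource cost lives.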

Unsupervised learning, on the other hand, is where you let the data and the algorithms tell you what’s going on. As a kind of exploratory process, it can help researchers uncover patterns in raw datasets that they didn’t know were there. As Zulqarnain explains, “You try to find structured sources of variation in the data, and then you see whether it aligns with any of your beliefs and then analyze the results.”
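
In code, that exploratory flavor can be as simple as the sketch below (an illustration on synthetic data, not a method from Zulqarnain’s research): principal component analysis hunts for structured sources of variation without ever seeing a label.

```python
# An unsupervised sketch on synthetic data: no labels anywhere.
# Ten noisy measurements are secretly driven by one hidden factor;
# PCA surfaces that "structured source of variation" on its own.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
hidden = rng.normal(size=(300, 1))   # the unknown driver
weights = rng.normal(size=(1, 10))   # how it shows up in each measurement
X = hidden @ weights + 0.3 * rng.normal(size=(300, 10))

pca = PCA(n_components=3).fit(X)
# Nearly all variance loads on the first component: structure the
# model found without being told it was there.
print(pca.explained_variance_ratio_)
```

The model recovers the hidden factor on its own; whether that factor means anything is a judgment the researcher still has to make, which is Zulqarnain’s point.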

Wrong Answers

AI is all about modeling human intelligence, but do humans learn in a way that is properly captured by either supervised or unsupervised learning? Do we merely collect vast reams of data, process them, then spit out some kind of workable pattern?

It’s an ongoing debate in the AI community, with luminaries like Yann LeCun arguing that supervised learning is a “bottleneck” for more generally intelligent AI models. Some EAI researchers even think the entire foundation of language modeling is on shaky ground. For Zulqarnain, the debate indicates a need for more interdisciplinary work—work that leaves room for experts from neuroscience, psychology, linguistics, and more.

“One of my favorite things to do,” he says, “is to explore the underlying assumptions that machine learning methods have, whether or not people acknowledge them, and what the implications of those assumptions are on scientific discovery.”

And that’s a task that gets more challenging by the day. As ML models—in the form of generative AI like ChatGPT and DALL-E—become more and more widespread and easier to use, the risk of misuse increases dramatically.

“When machine learning methods are misused,” Zulqarnain says, “they're not going to tell you that they're misused. They'll just give you the wrong answer.”

Mapping Emotions

Zulqarnain’s theoretical approach to ML makes him well suited to EAI’s AI + Health practice. One of his primary research areas is in the emotion sciences, exploring patterns in physiological and brain scan data and how they relate to behavior. One theory in the field claims emotions like anger and sadness have distinct neurological and physiological fingerprints. But Zulqarnain, drawing from the work of Northeastern psychologists like Lisa Feldman Barrett, questions that view.

Given the messiness of emotions, wouldn't it make sense that certain feelings are idiosyncratic, context-dependent? Feelings can be constructed from a variety of causes and stimuli, all of which have their own brain and physiological patterns that can’t be so readily dismissed as noise.

He says: “The word anger or things that you might identify as anger might show as a very different emotion for people, depending on how you're looking at things—different contexts, different cultures, different languages.”

Now, how does that translate to the challenge of trying to model emotions through data? Mostly as noise. With so much heterogeneity, there’s little recourse for identifying patterns that would imply a genuine emotional fingerprint—that is, without unsupervised learning.

To examine the utility of unsupervised learning, Zulqarnain and his colleagues collected some basic physiological data from volunteers as they went about their lives. Whenever the research team noticed a spike in heart rate, they would ping the volunteer to gauge how they were feeling. Then they took all that data and passed it through an unsupervised learning model, trying to discover any latent patterns.
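
A hypothetical stand-in for that analysis might look like the sketch below (synthetic data and a generic clustering step, not the team’s actual pipeline): cluster the physiological snapshots without labels, then measure how well the discovered clusters agree with what people said they felt.

```python
# A hypothetical stand-in for the analysis above, not the team's
# actual pipeline: cluster unlabeled physiological snapshots, then
# check how well the discovered clusters agree with self-reports.
# All numbers are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_moments = 500
# Placeholder features per pinged moment: heart rate, heart-rate
# variability, skin conductance.
X = rng.normal(size=(n_moments, 3))
# What participants said they felt (0 = calm, 1 = stressed, 2 = excited).
reported = rng.integers(0, 3, size=n_moments)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# A score near 1 means the data-driven patterns map neatly onto the
# reported feelings; near 0 means they mostly don't.
print("agreement with self-reports:", adjusted_rand_score(reported, clusters))
```

On real recordings, a low agreement score is the kind of signal that separates what the data shows from what the labels assume.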

“What we saw,” Zulqarnain says, “was while there were patterns that were consistent across individuals, there were also a lot of patterns that were unique to individuals, and they did not necessarily map neatly onto the feelings they said they were experiencing in the moment.”

This kind of structured variance may frustrate the task of pinning down specific emotional signatures in the brain, but it also provides a valuable lesson in how humans leverage data. The key takeaway is not necessarily what was learned, but rather the assumptions the analysis carries about what emotion looks like.

“If we had taken a supervised approach,” Zulqarnain explains, “if we had taken the labels of whatever the participants were feeling in the moment and trained a machine learning model to predict those emotions from sample data, we probably would have gotten some accuracy above chance. And then we might have concluded that, ‘Ohh, there are actually prototypes of the emotions that we will observe.’ But then we would have lost out on all these unique patterns that we would have essentially thrown out with the noise.”

And just what are those patterns telling us? Is noise ever really just noise? Are you sure about that?

Learn more about Zulqarnain’s work here.