How Do the World’s Most Popular Algorithms Lead to Bias?
Tomo Lazovich has seen firsthand how fickle big tech companies can be when it comes to Responsible AI. Before joining the Institute for Experiential AI as a senior research scientist on the Responsible AI team, Lazovich worked on the Machine Learning Ethics, Transparency, and Accountability (META) team at Twitter (now X). Though that stint ended with Elon Musk’s well-publicized shakeup, it gave Lazovich a behind-the-scenes look at how algorithms influence hundreds of millions of people’s lives every day.
At the institute, Lazovich is broadening that perspective with a series of research projects aimed at deciphering the real-world impact of the algorithms powering social media, generative AI chatbots, and internet search platforms.
“The Responsible AI team engages with industry partners who want to better understand their models, and there my experience is valuable because I’ve been inside a large company and seen how the sausage gets made,” Lazovich says.
In a private company, research projects need to align with the organization’s profit goals. As part of the institute’s Responsible AI team, Lazovich is free to explore the aspects of AI that have the largest consequences for humanity.
Harmful models
Lazovich’s primary interests are in building new tools and finding alternative data streams to understand how social media companies, including their former employer X, determine who and what should get attention.
In a recent essay, Lazovich and University of Chicago Professor Kristian Lum argued that social media research is often limited by an overly simplistic view of the systems being studied. Content feeds aren’t based on some isolated “master algorithm,” Lazovich says. Instead, many factors influence what ends up on people’s screens, such as manual content moderation, search histories, workplace restrictions, and user behavior. Researchers also often fail to clearly define the baseline or “neutral” algorithms against which they compare social media algorithms. Together, those problems can lead to false or misleading research conclusions.
Instead, Lazovich proposes more practical ways to address concerns about algorithmic content amplification: social media companies and researchers could focus on minimizing views of harmful content and eliminating bias against individuals and groups, while using audits to ensure platforms still deliver content users actually enjoy. To that end, they put forward two new metrics for further study: algorithmic exposure (in the context of harmful content) and algorithmic inequality.
“I call this approach operationalizing AI,” Lazovich says. “How do we take concepts that have been in the literature forever and turn them into things that people can really use in practical — often industry — settings?”
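To give a flavor of what operationalizing one of those ideas might look like, here is a minimal sketch, not Lazovich’s implementation, that treats algorithmic inequality as a Gini coefficient over per-account exposure counts and compares an algorithmically ranked feed against a reverse-chronological baseline. The function name and the toy numbers are hypothetical, chosen only for illustration.

```python
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient of a non-negative 1-D array (0 = perfectly equal, 1 = maximally unequal)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Standard closed-form expression using the values in ascending order.
    index = np.arange(1, n + 1)
    return float((2 * index - n - 1) @ v / (n * v.sum()))

# Hypothetical per-account impression counts under two ranking regimes.
ranked_feed_exposure = np.array([5, 12, 40, 3, 250, 7, 1, 90])      # algorithmically ranked feed
chrono_feed_exposure = np.array([20, 25, 35, 18, 60, 22, 15, 45])   # reverse-chronological baseline

print("Gini (ranked):", round(gini(ranked_feed_exposure), 3))
print("Gini (chronological):", round(gini(chrono_feed_exposure), 3))
# A large gap between the two suggests the ranking algorithm concentrates
# attention far more unequally than the baseline would.
```

The point of a measure like this is that it can be audited over time and compared against an explicit baseline, rather than arguing abstractly about whether “the algorithm” amplifies content.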
Another area of Lazovich’s research focuses on generative AI systems like the popular ChatGPT chatbot. If you tell the system you’re a liberal or a conservative and then ask it questions about the world, does it tailor its responses to your demographic? Lazovich’s preliminary data indicates it does. The answer has important implications for the many companies fine-tuning ChatGPT-style models on customer data.
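As a loose illustration of what such a probe could look like in code, the sketch below asks the same question under different self-declared political identities and compares the answers. It assumes the OpenAI Python client; the model name, prompts, and the informal comparison at the end are placeholders, not Lazovich’s actual protocol.

```python
# Toy probe: ask one question after declaring different political identities.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "In one paragraph, what should the government do about climate change?"
PERSONAS = {
    "liberal": "I'm a liberal voter.",
    "conservative": "I'm a conservative voter.",
    "none": "",  # no stated identity, used as a baseline
}

responses = {}
for label, persona in PERSONAS.items():
    messages = []
    if persona:
        messages.append({"role": "user", "content": persona})
    messages.append({"role": "user", "content": QUESTION})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,
        temperature=0,         # reduce run-to-run variation
    )
    responses[label] = reply.choices[0].message.content

for label, text in responses.items():
    print(f"--- {label} ---\n{text}\n")
# A real study would repeat this over many questions and runs and score the
# responses systematically (e.g., human raters or a stance classifier)
# rather than eyeballing a single pair of answers.
```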
A third area of Lazovich’s research looks at how Google’s search results affect social determinants of health, such as access to food. Many non-medical factors, including access to housing, education, and jobs, have been shown to predict a person’s health. Along with researchers from the D'Amore-McKim School of Business and Bouvé College of Health Sciences at Northeastern University, Lazovich is examining how people in food-insecure areas use Google to find food.
“The goal is to look for bias and disparity, to understand if Google is worse at servicing Black communities compared to white communities for these resources, or poor communities compared to rich communities,” they say.
Squashing myths
As a graduate student at Harvard University, Lazovich took part in the high-profile research that led to the discovery of the long-sought Higgs boson, a cornerstone of the Standard Model, the theory that describes the universe’s fundamental particles. The work exposed them to artificial intelligence, and Lazovich used that experience to transition to industry. But after three years at a fast-growing startup, Lazovich shifted focus yet again, exploring ways to make AI more trustworthy at X.
“At some point, I realized I wasn’t interested in only working on technical problems,” Lazovich says. “I wanted to understand how the things I was building were affecting people. That’s what led me to the Responsible AI space.”
Now that they work with ethical frameworks on the Responsible AI team, Lazovich hopes to dispel the notion that adopting Responsible AI practices will hold the industry back.
“People think Responsible AI and ‘standard’ AI are in competition with each other, but often the things we call Responsible AI are also just good engineering and science practices — things like making sure you’re formulating the right question, understanding what’s in your dataset, making sure the ways you’re evaluating your system are actually aligned with what you’re trying to accomplish,” Lazovich says. “It’s a false dichotomy. There are a lot of things that fall into the Responsible AI domain that will actually just make your model better and more robust. Wherever I go, I try to squash that myth.”
Above all, Lazovich’s mission is to spark in AI practitioners the same shift in thinking that led them to Responsible AI in the first place.
“The average machine learning engineer tells me they don’t feel qualified to think about these issues,” Lazovich says. “But you don’t need to understand that your approach is being derived from some specific justice philosophy to take up practices that will make you a more responsible researcher.”
Now that’s one message Lazovich wishes would go viral.
Learn more about the institute’s Responsible AI practice.