
ChatGPT's Hidden Bias and the Danger of Filter Bubbles in LLMs

March 1, 2024

Whether they’re starting a research project, answering a quick question, or just looking for general information about a subject, more and more people are using large language models (LLMs) like ChatGPT instead of Google. That makes it incredibly important that we understand how LLMs present information to us.

In a new paper, Institute for Experiential AI Senior Research Scientist Tomo Lazovich, who is also a member of our Responsible AI Practice, shows how ChatGPT skews its descriptions of politicians and media outlets based on the user’s political leanings. The phenomenon could perpetuate the so-called filter bubbles seen on social media and in search engines, in which users are shown content that aligns with their existing viewpoints. Such filter bubbles could have far-reaching consequences for how we form our views of the world.

“If LLMs are where search engines are moving to, we need to be very careful about understanding what kinds of personalization might be happening with them,” Lazovich says. “Otherwise, we might not know the information being served to us by these tools is being filtered through a particular lens. That’s the big risk.”

Lazovich doesn’t think ChatGPT was explicitly programmed to present information in a way that aligns with users’ views. Instead, they believe the behavior emerges from training LLMs on internet data and is triggered by the inclusion of certain words in the prompt. Still, now that Lazovich has shown ChatGPT skews its responses in at least some circumstances, they believe LLM owners should create new protocols for testing and monitoring such tendencies. Users should also understand how the content being served to them is tailored.

“This has the potential to change what information we have access to, change how we consume information, and change our ability to get the full breadth of exposure to different viewpoints that the internet is so good at when it’s not being filtered,” Lazovich says.

Zeroing in on Bias

Social media and search engine filter bubbles often develop when models present content based on users’ past behavior.

“You’re training these models based on user engagement — their impressions, likes, replies, etc. — and people are more likely to engage with things that are already aligned with their worldview,” Lazovich explains. “That means you can fall into these echo chambers where, as the algorithm learns what you engage with, it feeds you more of that content at the expense of information that’s still useful but doesn’t necessarily conform to your views.”

Another effect of personalized recommendation systems is polarization, in which users begin to hold more negative feelings toward people or content that opposes their views. For example, Democrats who see a lot of content aligned with their political views may hold harsher views of Republicans over time.

When Lazovich first noticed ChatGPT skewing its responses, they sketched out an experiment to study the phenomenon more closely. For this study, Lazovich asked ChatGPT 3.5 for factual information about members of the 2018 Senate class, media outlets, and presidential candidates since 2000. In their prompt, they also described the user’s political affiliation.

“I wanted to see if the answers would change,” Lazovich says. “It was the most basic way I could think of to do personalization with large language models.”
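
To make the setup concrete, here is a minimal sketch, in Python, of what that kind of probe might look like: the same factual request is sent repeatedly while only the user’s stated political affiliation changes. It assumes the OpenAI Python SDK and uses placeholder entity names; it illustrates the general approach, not the code used in the paper.

```python
# Illustrative probe for prompt-based personalization (not the paper's code):
# ask the same factual question while varying only the user's stated politics.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AFFILIATIONS = ["a Democrat", "a Republican"]            # stated user identity
ENTITIES = ["Senator Jane Doe", "The Example Tribune"]   # placeholder entities

def ask(entity: str, affiliation: str) -> str:
    """Request factual background on an entity while disclosing the user's politics."""
    prompt = (
        f"I am {affiliation}. "
        f"Please give me factual background information about {entity}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so differences reflect the prompt
    )
    return response.choices[0].message.content

# Collect paired responses so they can be compared side by side.
responses = {
    (entity, affiliation): ask(entity, affiliation)
    for entity in ENTITIES
    for affiliation in AFFILIATIONS
}
```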

Lazovich used online databases to characterize the political leanings of each politician and media outlet. They found that ChatGPT tended to include more positive information, and omit negative information, about entities that aligned with the user’s politics, while the opposite was true for entities on the other end of the political spectrum.
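
The paper’s exact scoring method isn’t described in this post, but the comparison itself can be sketched simply: score how positively each response portrays its entity, then compare responses where the entity’s leaning matches the stated user affiliation against responses where it doesn’t. The crude word-count scorer and the leaning lookup below are stand-ins for illustration, reusing the `responses` mapping from the sketch above.

```python
# Illustrative comparison only; a real study would use a proper sentiment or
# stance measure and a curated database of political leanings.
POSITIVE = {"praised", "respected", "successful", "effective", "popular"}
NEGATIVE = {"criticized", "controversial", "scandal", "failed", "unpopular"}

# Hypothetical lookup standing in for an external leanings database.
ENTITY_LEANING = {"Senator Jane Doe": "Democrat", "The Example Tribune": "Republican"}

def positivity(text: str) -> int:
    """Positive-minus-negative word count as a rough proxy for tone."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

aligned, opposed = [], []
for (entity, affiliation), text in responses.items():
    score = positivity(text)
    # "a Democrat" aligns with Democrat-leaning entities, and so on.
    if ENTITY_LEANING[entity].lower() in affiliation.lower():
        aligned.append(score)
    else:
        opposed.append(score)

print("mean positivity toward aligned entities:", sum(aligned) / len(aligned))
print("mean positivity toward opposed entities:", sum(opposed) / len(opposed))
```

A skew like the one reported would show up here as a higher mean score for aligned entities than for opposed ones.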

Responding to Bias

The results were a surprise because Lazovich doesn’t believe OpenAI trained ChatGPT to create filter bubbles.

“I don’t think there was intent or a decision to make responses more aligned with Democrats or Republicans,” Lazovich explains. “It's an effect that’s emerging only from the fact that specific words are included in the prompt. People are also often tailoring content to specific viewpoints, so in hindsight maybe it shouldn’t be surprising that the model has learned to do that. But it’s not obvious that these filter bubbles should arise.”

Social media companies have explored a number of possible solutions to filter bubbles in the past. One approach is the use of so-called bridging algorithms, which ensure the model optimizes for more than just engagement; showing a diverse set of viewpoints, for instance, could be another optimization goal. Another approach comes from Lazovich’s former employer X, which uses “Community Notes” to attach crowdsourced clarifications and context to posts.
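
The bridging idea can be illustrated with a toy ranking function that blends predicted engagement with a term rewarding approval across viewpoint clusters. The class names, fields, and weights below are invented for illustration and do not describe any platform’s actual ranking system.

```python
# Toy "bridging" ranker: blend predicted engagement with cross-group approval
# so that content endorsed across viewpoint clusters can outrank one-sided posts.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float           # engagement model output, in [0, 1]
    approval_by_group: dict[str, float]   # approval rate per viewpoint cluster

def bridging_score(post: Post, diversity_weight: float = 0.5) -> float:
    """Blend engagement with the minimum approval across viewpoint clusters."""
    cross_group_approval = min(post.approval_by_group.values())
    return ((1 - diversity_weight) * post.predicted_engagement
            + diversity_weight * cross_group_approval)

posts = [
    Post("a", 0.9, {"left": 0.8, "right": 0.1}),  # highly engaging but one-sided
    Post("b", 0.6, {"left": 0.7, "right": 0.6}),  # less engaging, broadly approved
]
ranked = sorted(posts, key=bridging_score, reverse=True)
print([p.post_id for p in ranked])  # ['b', 'a']: the broadly approved post wins
```

An engagement-only ranker would put the one-sided post first; the diversity term flips that ordering, which is the behavior bridging algorithms aim for.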

Lazovich is currently working with a group of postdocs and researchers to scale up the project and explore how ChatGPT personalizes responses based on other demographics like gender, race, and age.

“When we’re asking questions about known entities — whether people, cities, or institutions — we’re trying to understand more generally if there are variations in the information included in the outputs,” Lazovich explains. “Are they characterized in different ways based on their demographics? Do women get more positive portrayals of women? Do men get more positive portrayals of men?”

The issue is one of many reasons company leaders should be careful when incorporating even popular AI technologies into their operations. Pitfalls like these are why our institute developed Responsible AI training for executives, with upcoming sessions taking place in Boston, Miami, and Portland this spring and summer.

Perhaps the best way to mitigate the harmful effects of filter bubbles is to inform users when they’re in one. Lazovich believes that should be the first step for companies deploying LLMs.

“Community Notes basically help you see that you’re inside an echo chamber, because they add qualifications to the information you’re seeing,” Lazovich says. “As LLMs increasingly serve as knowledge bases, these responses are having a big influence on people’s views, without them even realizing exactly how this information is being served to them.”

Researchers like Lazovich work at the intersection of cutting-edge AI research and applied AI solutions for organizations. Learn more about the Responsible AI Practice today.