Can Social Media Be Fixed? In Conversation With Four EAI Researchers on the “Insane Profitability” of Algorithms
Information bubbles, confirmation bias, dehumanizing rhetoric—the most toxic aspects of social media are familiar to anyone who has ever scanned a Twitter thread or Facebook page. But we have only recently begun to understand the role that AI plays in stoking all that toxicity.
In a recent paper published by Cambridge University Press, Institute for Experiential AI (EAI) researchers Kenneth Church, Annika Schoene, John E. Ortega, and Raman Chandrasekar surveyed the literature around social media and the algorithms that drive it. Rather than focusing squarely on biased or unfair algorithms, the paper, which quickly shot to the top of Cambridge University Press's "most read" list, examines an emerging risk landscape that includes insurrections, misinformation, political polarization, and even genocide. At the heart of it all: what the authors call "insanely profitable" algorithms.
We sat down with Ken, Annika, John, and Chandra to talk about why the paper resonated with the scientific community, how important it is for AI practitioners to see the social impact of their work, and whether the “Frankenstein’s monster” that is social media can ever be contained.
You can learn more about the research team at EAI and what they’re working on here.
***
You use the terms Risk 1.0 and Risk 2.0 to refer to two different classes of risks that social media poses to society. Can you define each?
Annika: Risk 1.0 covers any kind of AI bias related to protected characteristics—things like gender, sexuality, nationality, and socioeconomic status. While these harms are very real, affect real people, and can be systemic, Risk 2.0 is much bigger in terms of impact. We are talking about things that can affect whole nations, whole governments, et cetera.
Ken: Genocide, insurrections, polarized politics…
Annika: Yes, and there are things that fall in a gray area between Risk 1.0 and 2.0—for example, hate speech or abuse directed at people based on race, sexuality, or gender.
You also make clear in the article that Risk 2.0 is not meant to supersede Risk 1.0, but rather to be discussed alongside it.
Annika: It's really about raising awareness that Risk 2.0 exists, and that there are things in Risk 1.0 that influence it, so we need to go beyond Risk 1.0 alone. Even within Risk 1.0, attention is distributed very unevenly. There is more focus on specific gender issues, for example, when we should also be talking about age, race, ethnicity, socioeconomic status, and the things that influence, say, insurrections: why people go out and take up arms. They're not separate; they're connected. Neither is more important than the other, but Risk 2.0 has received less attention.
I like how the article allegorizes this unholy union of machine learning and social media as a Frankenstein monster. Can you summarize what you mean by that?
Ken: One of the things we tried to do was report the positions others were taking rather than editorialize. The Frankenstein monster was a reference to a book by Mark Bergen (Like, Comment, Subscribe: Inside YouTube's Chaotic Rise to World Domination) and a review of the book that characterizes YouTube as a creature whose makers have lost control. I think what they're talking about is the idea that—perhaps on purpose, perhaps accidentally—the community put these social platforms together, and then they escaped into the wild and took on a life of their own. Some of the discussion of our article on social media has questioned whether the problem is the recommender algorithm or the virality of the way gossip spreads. Really juicy gossip will get repeated, and it's maybe not the algorithm causing it to spread; if it's juicy enough, it spreads on its own. The technology has made it easier for juicy gossip to spread, gossip is juicier if it's not true, and so fact-checking gossip is a waste of time.
There are two really interesting examples in the article of success stories in fighting the Frankenstein monster. One is Wikipedia's nonprofit status, which is often cited as a reason it's less toxic than social media sites. Another is Reddit under former CEO Ellen Pao, who made an effort to eliminate the site's most toxic users, which is interesting because Pao was ultimately forced out of that position for doing pretty much exactly that. But it raises the question: what would need to happen for social media companies to overhaul their processes and address these problems, be it Risk 1.0 or Risk 2.0?
Chandra: There's a third example in face recognition. When people found that Black faces were not recognized as well by facial recognition algorithms, it created such a stir that companies had to buckle down and fix it. Another example is the city of Seattle, which added a tax to sugar-sweetened drinks. If you sold or bought a Coke, for example, the distributor, and ultimately you, paid slightly extra. This tax led to a perceptible decrease in the consumption of sweetened drinks. I'm actually very optimistic that incentives like these could work when applied to social media.
Ken: A proposal that's not in our paper, but that I've been thinking about, is some kind of regulator like the Federal Reserve. The Fed changes interest rates depending on metrics like inflation and unemployment, and it can change the money supply based on those metrics. I'm wondering if we could have an analogous metric for the good and the bad on social media. There's a certain amount of benefit that social media provides, and there's also some toxicity. You could imagine an index, like inflation or unemployment, that measures the good and the bad, and a regulator who could raise or lower taxes or some other incentive to handle the trade-off between them.
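To make the shape of that proposal concrete, here is a minimal sketch of such a feedback loop in Python. Everything in it, the composite index, the proportional adjustment rule, the rate bounds, and the sample numbers, is an illustrative assumption for this write-up, not anything specified in the paper.

```python
# Toy sketch of a "Federal Reserve for social media": a regulator reads a
# composite benefit/toxicity index and nudges an incentive (here, a tax
# rate) up or down in response. All names and numbers are hypothetical.

def toxicity_index(benefit_score: float, toxicity_score: float) -> float:
    """Composite index: positive when measured toxicity outweighs benefit."""
    return toxicity_score - benefit_score

def adjust_tax(current_rate: float, index: float,
               sensitivity: float = 0.05,
               floor: float = 0.0, ceiling: float = 0.30) -> float:
    """Move the rate in proportion to the index, clamped to sane bounds,
    the way a central bank moves rates in response to inflation."""
    return max(floor, min(ceiling, current_rate + sensitivity * index))

# Example: toxicity runs above benefit at first, so the rate rises,
# then eases back as the balance improves.
rate = 0.10
for quarter, (benefit, toxicity) in enumerate(
        [(0.6, 0.9), (0.7, 0.8), (0.8, 0.7)], start=1):
    rate = adjust_tax(rate, toxicity_index(benefit, toxicity))
    print(f"Q{quarter}: tax rate -> {rate:.3f}")
```

The mechanism, not the formula, is the point: a published index plus a regulator empowered to move an incentive against it, analogous to the Fed's inflation targeting.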
There's also a stance in your paper that regulation is not enough. Can you explain what you mean by that? What else is needed to mitigate the risk landscape on social media?
Ken: One concern I have—and I think this plays differently in Europe than in America—is that in America regulation is a zero-sum game. If regulation is seen as favoring one party over the other, then the party that benefits from the status quo can block any changes or make them ineffective. I think it would be very difficult to get this through in the United States given that reality. But it plays differently in Europe, where there isn't a two-party system and there's more concern about these issues and more willingness to take them on. I also think we need to get to a point where peer pressure takes a harder line against toxicity on social media.
Which basically means it's a cultural concern, and culture moves very slowly.
Chandra: I'm inherently optimistic, but in this particular case you see people at major companies complaining about their own policies. People at Facebook (Meta) and Google have asked their companies to change how they handle the social media aspects of their business. Members of the public are complaining about these things as well. Eventually, things will reach a tipping point, and I think that process is accelerating. People are forced to change their attitudes because advertisers are dropping off. There are economic pressures, people pressures, and regulatory pressures. People will find that social media didn't quite deliver what we expected of it, so the system will hopefully correct.
Your approach in this article wasn’t to contribute to all these intersecting fields but to survey them. From that high-level perspective, what would you say was the most important takeaway?
Annika: In the context of Natural Language Processing, the field is just developing, so it’s to be expected that we have somewhat limited knowledge when it comes to risks like bias or addictive algorithms. But unless there is intervention from some kind of regulatory watchdog—like there is for media and entertainment in the UK—then I don't think the profitability will change and I don't think the way social media companies work will change. At the end of the day, social media is about engagement and having users, and if your metric internally says that gossip keeps people engaged, then that's what’s going to happen.
Ken: A lot of the work on Risk 1.0 is authors writing about issues that are very real to them, and it's hard to imagine how these things would play out at other points in time, space, and culture. There was a lot of discussion of the riots in Sri Lanka, where Tamil speakers were on the receiving end. I suspect this article resonates with lots of people for lots of different reasons, but one of the opportunities is to move away from the concerns that affect the authors and researchers toward the things that impact people in other places with other perspectives, because we as researchers are not doing a good enough job thinking about the consequences of what we do.
That was a very arresting comment in the article: the anecdote about Sri Lankan leaders struggling to get in touch with Facebook about moderating misinformation during the riots, and then Facebook only getting in touch when they saw traffic dip. That's pretty alarming.
John: From my point of view, it is important for this article to have a holistic viewpoint. Several of its authors came from, or have lived in, different countries. In my experience, when one lives in Europe, one tends to believe that the government will address social issues such as overeating and overconsumption, and it mostly does take care of those kinds of things. In the U.S., I do not believe that we want government interference (at least that is what voting patterns seem to show). Therefore, in my opinion, Americans could use a wake-up call of this type when it comes to AI in our lives.
Chandra: I lived in India for about 30 years before I came to the U.S., and more recently I lived in China for six months. It's really interesting to me that India imposes blanket bans on certain platforms; TikTok, for example, is banned there. When there were problems in Kashmir, the government simply stopped all internet traffic in the region. Those things are very concerning. You may talk about using social media to stir an insurrection, but you also have to think about the other side: how governments use social media, or specifically the absence of social media, to stop people from protesting.
The idea of the information bubble has penetrated the discourse to the point where most people know what it is. People online even use it as a rhetorical device against those they disagree with. So we have this meta-discourse that's trying to pull us away from information bubbles, yet somewhere there is a bubble that is true, or at least contains elements of truth. Does it worry you that this new stage of discourse may be overly concerned with misinformation bubbles, and that something may be lost in the attempt to pop them?
Ken: Take the cliché of the post-truth world: there was a time when truth mattered, but now we're beyond that. I tend to be more optimistic. I think we're living in an interim period that will be remembered like the Wild West. There used to be gunfights all over the place, but that was bad for business. People like to feel safe when they walk the streets. If nobody feels safe on the streets, if nobody can believe anything they hear, if people feel happy repeating things they know to be false, that's not good for society, not good for business, and not good for people. This period we're in right now is very brief, and I think you can already see a backlash, now that the people who were pushing the "big lie" were mostly unsuccessful in the last round of elections. It worked for a little while, but it won't work for long. The post-truth thing can work briefly, but it doesn't have a long-term future.
Chandra: When I find myself in times of trouble, it's not Mother Mary who comes to me but the national motto of India, which seems apt, naive, yet optimistic. India's motto is "Satyameva Jayate," which means "truth alone triumphs." I read that as "truth will ultimately triumph." The wheels of justice grind slowly and all that, but ultimately they grind the right way. We need to nudge them along, and we need to do our bit.
Annika: All this being said, I don't think any of us underestimates the real effects social media currently has on people. Terrible things are obviously happening because of it, and even if this period is temporary, that doesn't negate the harm going on right now. I think we see this especially in younger generations, with people who are more active in protesting, holding up a mirror to society to say, "This is really not great." For example, in the last election the red wave promised all over social media did not happen. I think there's still a very stark distinction between what goes on online and what happens in reality in terms of facilitating change.
Ken: In historical linguistics, they talk about language contact and how languages are not isolated from one another. There are migrations and borrowings and so on—and not just of words. It can be ideas and politics and views and empathy. One of my concerns with Covid is that there's been less contact and more social media, and I think that's a dangerous mix. You're hearing things being said about the "other" that are pretty frightening. While Chandra mentioned TikTok being blocked in India, there's talk of blocking it all over the place. And the reason for blocking it doesn't seem to be toxicity, since there isn't as much of it as on Facebook; it's more that we trust American companies and don't trust Chinese companies. That doesn't sound to me like a good argument. What I would like to see is more contact and less gossip, and the countermeasures we take should address the issues, not serve as an excuse for something else. It feels like the toxicity could be used as a pretext to crack down on the "other." And I worry about this kind of pivoting, where we talk about Risk 1.0 as an excuse for not talking about Risk 2.0.
***