
The False Promise of AI Democratization

Behind big tech rhetoric, disparities persist.
February 16, 2024

By: Ricardo Baeza-Yates

Organizations across the developed world are celebrating efforts to “democratize AI” by creating AI solutions for everyone. I believe those promises ring hollow.

You can’t simply give AI away because not everyone has the same opportunity to use it.

Organizations claiming to be “democratizing AI” ignore this reality.

These misrepresentations are not new. Many people claimed the internet was “democratizing information.” But even today, only about 68 percent of the world’s population has access to the internet. Disparities cut across income, age, physical and cognitive abilities, and more, separating the digital “haves” from the digital “have nots.”

Another way people can be left out of technology is language, and that certainly applies to AI. OpenAI’s ChatGPT tool claims to support more than 50 languages, but there are about 7,000 languages still used in the world today.

These disparities don’t mean that all technology is bad, or that new innovations need to support all people equally from the moment they’re developed. Expecting a chatbot to support 7,000 languages is impractical.

My main problem is not with the technologies; my main problem is with our cognitive biases regarding these facts. Because we speak English, we don’t worry about other languages or put ourselves in the shoes of people who speak them. And that tendency to think from only one perspective and ignore other people’s experiences is not just a result of inequality — it’s a driver of it.

Powerful Tools Bring Responsibility

There are many ways in which AI technology is deepening and reshaping the digital divide. For one thing, the pace of AI adoption is unprecedented. AI’s unique capabilities also present new questions.

For instance, as AI technology helps us outsource more of our thinking, will it change the way our brains work? Humans used to navigate solely by memory and geo-spatial understanding. Today, many of us use GPS instead, and it is getting harder to find places without a personal navigator in our pockets.

I once saw a meme that said that in five years people will put on their resumes that they know how to write by hand. It’s funny, but if AI helps us write, and writing is how we organize our thoughts, it could jeopardize our ability to think. As AI grows in popularity, people will come to depend on it for things that, today, we do easily on our own. Technology is a double-edged sword. I like to say that people who don’t have access to the internet have something we don’t: privacy.

Then there’s the possibility of AI becoming a central repository of human knowledge. This could deepen digital divides in other ways. For instance, if you ask a large language model (LLM) a question in your native language, the LLM will, in effect, translate it to English, answer the question, and then translate the answer back into your language. That LLM is basically colonizing the world through knowledge, because its answers will have mainly American views embedded in them.

If you ask ChatGPT in a European country how many continents there are, the answer you should get back (six) is different from what you get if you ask in the U.S. (seven), because the U.S. splits America into two continents. But if you ask why the Olympic flag has five rings, it will tell you it is because of the five continents that participate in the Olympiads. (Antarctica does not participate.) So its “knowledge” is not even consistent, because in fact it is just predicting the answer and does not really “know” it. Now, imagine cultural differences around more consequential topics, such as details of history or controversial subjects like abortion or gun rights. All of these issues intersect with concerns over censorship and human rights.
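This kind of inconsistency is easy to probe for yourself. As a minimal illustrative sketch (not part of the original argument, and assuming the OpenAI Python SDK, an API key in the environment, and a placeholder model name), you might ask the same factual question in several languages and compare the answers:

```python
# Illustrative sketch only: probe whether an LLM's answer to the same factual
# question shifts with the language of the prompt. Assumes the OpenAI Python
# SDK is installed and OPENAI_API_KEY is set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

QUESTIONS = {
    "English": "How many continents are there? Answer with a single number.",
    "Spanish": "¿Cuántos continentes hay? Responde solo con un número.",
    "French": "Combien y a-t-il de continents ? Réponds par un seul nombre.",
}

for language, question in QUESTIONS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content.strip()
    print(f"{language}: {answer}")
```

If the counts differ with the language of the prompt, that reflects the cultural framing embedded in the model’s training data rather than a settled fact.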

Consider also the way LLMs respond to queries with definitive, unsourced declarations of fact (as opposed to, say, a web search). That makes it especially difficult to spot implicit biases as well as wrong answers. In many cases, only an expert can spot a wrong answer, as happened in the first public demo of Google’s Bard, when it misstated a fact about the James Webb Space Telescope.

With AI, you have to choose whether or not to believe the answer. It’s more like faith or religion, and that’s extremely dangerous.

The Responsible AI Approach

All this relates to the work I’m doing with my colleagues at the Institute for Experiential AI. Experiential AI means AI with a human in the loop. If we use AI to empower people instead of replacing them, we’ll avoid a lot of problems.

At the Institute's Responsible AI practice, along with Director Cansu Canca, I lead a team that provides organizations with a comprehensive approach to AI ethics, governance, and legal compliance. We start by assessing your Responsible AI capabilities, risks, and priorities, and creating tailored AI impact maps. Then we apply our proven risk-management strategies along with customized tools, guidelines, independent consultation, training, and more. Learn more about the institute’s Responsible AI practice or get in touch to find out how we help partners.

Implementing AI systems ethically begins by considering broader ethical issues. All these problems arise because of some kind of divide. There are digital, educational, knowledge, language, and economic divides, and with technologies like AI we are widening and multiplying them. That’s what we’re trying to stop.

The technologies we make and use are a reflection of ourselves — and the more powerful those technologies become, the greater their potential to cause harm.

Learn how executives and team leaders can create an actionable RAI blueprint for their organizations in our new Responsible AI Executive Education Courses.