What Is the Difference Between AI Ethics, Responsible AI, and Trustworthy AI? We ask our Responsible AI Leads

By: Tyler Wells Lynch

AI is everywhere—driving cars, diagnosing illnesses, making credit decisions, ranking job candidates, identifying faces, assessing parolees. These applications alone should be enough to convince you that AI is far from ethical. Nonetheless, terms like “ethical AI” prevail alongside equally problematic terms like “trustworthy AI.”

Why are these phrases so thorny? After all, they’re just words—how dangerous can they be? Well, to state the obvious, words matter, and if we’re ever to achieve a future where AI is worthy of our trust, then we at least need to agree on a common vocabulary.

To explain the differences between these terms and why they matter, we spoke to the co-chairs of the AI Ethics Advisory Board at the Institute for Experiential AI (EAI): Cansu Canca and Ricardo Baeza-Yates.

The Problem With "Trustworthy AI"


For Ricardo Baeza-Yates, who is also the director of research at EAI, it all comes down to a fundamental distinction between human and computational abilities. Artificial intelligence is not human, so we should avoid terms like “trustworthy AI” that not only humanize AI but also imply a level of dependability that simply does not exist.

“We know that AI does not work all the time, so asking users to trust it is misleading,” Baeza-Yates explains. “If 100 years ago someone wanted to sell me an airplane ticket calling it ‘trustworthy aviation,’ I would have been worried, because if something works, why do we need to add ‘trustworthy’ to it? That is the difference between engineering and alchemy.”

Cansu Canca, ethics lead at EAI, adds that “trustworthy AI” directs attention to the end goal of creating trust in the user. By doing so, it circumvents the hard work of integrating ethics into the development and deployment of AI systems, placing the burden on the user instead.

“Trust is really the outcome of what we want to do,” she says. “Our focus should be on the system itself, and not on the feeling it eventually—hopefully—evokes.”

The Problem With "Ethical AI"


Ethical AI faces a similar problem in that it implies a degree of moral agency. Humans intend certain ethical outcomes. They can make value judgments and reorient their behavior in pursuit of those goals, capacities that do not translate to the world of algorithms.

“AI can have an ethical outcome or an unethical outcome,” Cansu says. “It can incorporate value judgments, but it's not an ethical being with intent. It's not a moral agent.”

Ethics, in that sense, is strictly the domain of human beings. Challenges emerge when people design systems with autonomous decision-making capabilities, because those systems are only as ethical as the intent of the people who create them.

Responsible AI


Ricardo and Cansu both prefer the term “responsible AI” while acknowledging that it, too, is imperfect. “Responsibility is also a human trait, but law has extended the concept of responsibility to institutions, so we use it in that sense,” says Ricardo.

“In a way, 'responsible AI’ is a shorthand for responsible development and use of AI, or responsible AI innovation,” Cansu adds. “The phrase is still open to the interpretation that AI itself will have some responsibility, which is certainly not what we mean. We are trying to emphasize that responsible AI is about creating structures and roles for developing AI responsibly, and that responsibility will always lie in these structures and the people who design the systems.”

Cansu and Ricardo both see AI ethics as a component of responsible AI. Within that subdomain we find the perennial ethical question, “What is the right thing to do?” And in the larger domain around it we find room for innovation—an exploratory, interdisciplinary space for designers, developers, investors, and stakeholders that ultimately (hopefully) points towards an ethical core.

“We philosophers collaborate with developers and designers to find the ethical risks and mitigate them as they develop AI systems and design AI products,” Canca says.

Such is the mandate of the AI Ethics Advisory Board at EAI—an on-demand, multidisciplinary panel of AI experts representing industry, academia, and government. With philosophers and practitioners alike, the board helps organizations anticipate ethical perils without falling into the trap of thinking AI itself could ever have moral agency.

Find out how the AI Ethics Advisory Board helps organizations address difficult ethical questions during AI planning, development, and deployment. 

Watch Canca and Baeza-Yates talk more about responsible AI, AI ethics, and trustworthy AI in this fireside chat.
