Why We Need Responsible AI


In recent months, Twitter, Microsoft, Amazon, Google, and Meta have all eliminated or reduced their AI ethics teams. The timing is odd given the competitive frenzy kicked off by the release of ChatGPT. After Microsoft's $10 billion investment in OpenAI (the maker of ChatGPT), Google hastily released its own chatbot, Bard. And last month, Meta unveiled LLaMA, a 65-billion-parameter language model that, unlike ChatGPT, is open-source. The explosion of R&D alongside mass layoffs suggests that tech giants believe AI ethics and AI innovation are mutually exclusive. But is that true?

Here are five things to know about Responsible AI, according to EAI experts:

“While we are all fascinated with the apparent eloquence and ‘fluency’ of chatbots like ChatGPT, it is important that we do not confuse these with ‘intelligence’—we are far from systems that have a semantic understanding of what they are saying. We are also far from systems that have reasoning capabilities—including common-sense reasoning—which remains elusive for machines and strictly in the domain of humans. In such an environment, it is particularly important that we create guardrails for how to use the technology while avoiding serious ethical issues. It is disheartening to see the big players in AI disassemble their ethics teams, but it serves as motivation for EAI to double down on its own Responsible AI practice, and on delving deeper into what it would take to develop trust in AI.”


Industry cannot ask for self-regulation and at the same time get rid of its AI ethics teams. It’s a complete contradiction. I am worried about the near future, though not an apocalyptic one. We can now generate videos with the right face, the right voice, and a false message, so it will be very hard to distinguish what is true. Even a Zoom call may not help in the future, as there will be perfect avatars of us. If we do not do something, democracy might be in danger. We need to regulate the unethical usage of AI. We need to stop irresponsible AI, but we do not yet know how, because it is not trivial to enforce.


Ethical decisions and value judgements are inherent parts of the innovation process, and they are necessarily made on a day-to-day basis. Dismissing ethics only means that these decisions are more likely to be misarticulated, misguided, opaque, and untraceable. It will be interesting to watch how the market reacts to the actions taken by the tech giants. Will other companies follow suit and race to the bottom, or will they take the opportunity to offer better products? Ethics is not an abstract ideal. Consumers, especially institutional consumers in sectors like healthcare, public safety, and finance, prefer ethically robust products because they reduce regulatory and reputational risks and provide an edge against competitors.


With unprecedented levels of data becoming available and more powerful AI tools helping us make sense of that information, organizations that produce AI need to focus on using these tools as a force for human empowerment. The tools don't have to be perfect for us to seize the opportunity to do better than our current systems. Unfortunately, we seem to be going in the opposite direction, or at the very least not putting in place the personnel and processes to ask and answer: how might we use these tools for good? Discussion of the far-off future or the robot apocalypse may be diverting attention from how we can leverage the technology to design better mechanisms for participatory oversight.


Generative AI models are full of biases. They do not have access to verifiable information, so they can easily offer false and even dangerous advice, and they may present false answers confidently as factual information. We are in the midst of another shake-up on the scale of the steam engine and the horseless carriage, and it will take some time until we figure things out. Yes, we should be careful, cautious, and even a little concerned. And we should not accept the tech giants’ dismissal of responsible and ethical concerns about the use of AI. At the same time, we should not be alarmists. The combined wisdom of academics, researchers, ethicists, and concerned decision-makers will, in the end, figure out how to do this right.



Read more from EAI and see our events. Learn about our AI Solutions Hub, Responsible AI Services, and on-demand AI Ethics Advisory Board, and discover business opportunities.
