Why We Need Responsible AI for Business

May 19, 2023

AI’s rapid growth is due in large part to its utility in business settings. Risk assessment, process automation, financial analysis, and medical diagnostics are just a few of the applications powered by AI’s extraordinary predictive capabilities. But AI’s ever-evolving hazards are just as serious as its opportunities. Biased outputs, strategic misalignment, IP infringement, reputational attacks, and factual “hallucinations”—these risks underscore the need for a Responsible AI framework that preserves functionality and efficiency while protecting against ethical, legal, and reputational harm.

Here are five things to know about Responsible AI for Business from EAI experts:

“As firms deploy AI, they can embed unintended biases, create new risks, and potentially break the law without realizing it. Responsible AI starts with risk assessments, remediation of risky algorithms, and awareness of the potential issues. It allows firms to adopt the technology while minimizing the risks and demonstrating to regulators and the public that they are worthy custodians of the trust we place in them. At EAI, we are working with several financial firms to enable them to take a responsible approach to innovating with AI. Through these experiences, we help our partners do what’s right while evolving the proper training curriculum for future practitioners of Responsible AI.”

—USAMA FAYYAD, EXECUTIVE DIRECTOR
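
To make the risk assessments Fayyad describes more concrete, here is a minimal sketch in Python of one common check: measuring whether a model's favorable decisions fall evenly across demographic groups. The loan-approval data, group labels, and tolerance below are illustrative assumptions, not an EAI tool or methodology.

```python
# Minimal sketch of one step in an algorithmic risk assessment:
# measuring the demographic parity gap in a model's approval decisions.
# All data, group names, and thresholds here are illustrative.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + pred, total + 1)
    rates = {g: positives / total for g, (positives, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outputs (1 = approved) for two applicant groups.
preds  = [1, 1, 1, 1, 0,  1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A",  "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.80 vs 0.20 -> gap of 0.60
if gap > 0.20:  # illustrative tolerance, not a regulatory threshold
    print("Flag this model for remediation and human review.")
```

A real assessment would cover many more metrics (equalized odds, calibration, disparate impact ratios) and feed into the remediation and governance steps described above, but even a check this simple can surface an algorithm worth flagging.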

“As tech giants loosen their oversight of Responsible AI, the implications for the health and healthcare sectors are profound. AI algorithms risk being inadvertently developed with inherent biases that then go unchecked as ethics teams at those organizations thin out. The result will be a poorer product and less customer trust. However, these challenges also present an opportunity for healthcare organizations to develop their own Responsible AI oversight committees to protect patients, optimize utilization, and increase trust and transparency, much as those organizations already convene research oversight committees to protect human participants in research studies. The same can be done in the commercial sector.”

—GENE TUNIK, DIRECTOR OF AI + HEALTH

“The volume and complexity of life sciences data are beyond what humans can process without AI. From feeding our growing populations sustainably to achieving real progress in precision medicine, advancing life sciences research requires coupling wet lab research with AI. Despite this potential, current research and data in the life sciences are heavily biased and inequitably distributed. Additionally, the risks associated with unintended uses of these high-resolution data grow as the data sets grow. The Institute for Experiential AI has a tightly integrated ethical and responsible AI practice that cuts across all of our subject matter verticals, including the life sciences. We are uniquely positioned to integrate ethics into our wet-lab-in-the-loop AI work to improve the human condition equitably and responsibly.”

—SAM SCARPINO, DIRECTOR OF AI + LIFE SCIENCES

“Climate change has been called the defining challenge of our age and an existential crisis for the insurance sector. However, progress in accounting for climate risks has been slow: it has taken considerable effort to convince the insurance sector that natural catastrophe models need to account for climate risks more explicitly. This is where hybrid physics and AI models of the climate can make a difference. Along with risk modeling, AI can help inform decisions and policies by bringing large-scale global climate model outputs down to the scale of stakeholder decisions.”

—AUROOP GANGULY, DIRECTOR OF AI FOR CLIMATE + SUSTAINABILITY
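
As a rough illustration of that last point, the sketch below shows the statistical-downscaling idea in its simplest form: a regression, standing in for the AI component, that maps coarse global climate model (GCM) output onto the local scale where stakeholder decisions are made. The synthetic data and linear model are assumptions for illustration; operational hybrid physics-AI models are far richer.

```python
# Minimal sketch of statistical downscaling: learn a correction that maps
# coarse global climate model (GCM) output to local station observations.
# All numbers below are synthetic and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

# "Physics" side: coarse GCM grid-cell temperature anomalies (deg C).
gcm_coarse = rng.normal(loc=1.0, scale=0.5, size=200)

# Local observations: this station runs warmer than its grid cell,
# with extra local variability (elevation, land use, and so on).
local_obs = 1.4 * gcm_coarse + 0.3 + rng.normal(scale=0.2, size=200)

# "AI" side: fit a simple linear downscaling model y = a*x + b.
X = np.column_stack([gcm_coarse, np.ones_like(gcm_coarse)])
(a, b), *_ = np.linalg.lstsq(X, local_obs, rcond=None)

# Apply the learned correction to a new coarse projection.
future_coarse = 2.0  # hypothetical end-of-century grid-cell anomaly
print(f"Downscaled local anomaly: {a * future_coarse + b:.2f} deg C")
```

In practice the regression would be replaced by a model trained across many grid cells and variables, with the physics-based GCM supplying the large-scale structure that a statistical model alone cannot.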

“As someone who was part of an internal industry ethics team that was cut, I feel for my colleagues at other big tech companies. I think many of these layoffs are coming because companies see Responsible AI teams as ‘expendable’ in a tough economic climate—they’re perceived as not contributing to the bottom line. This is one of the biggest misconceptions in the field. Many leaders seem to think you can build systems to be performant or you can build them responsibly, but not both. In reality, many of the safeguards and best practices that Responsible AI teams put in place can both improve performance of the system and ensure more just outcomes for users.”

—TOMO LAZOVICH, SENIOR RESEARCH SCIENTIST

Read more from the Institute and see our events. Learn about our AI Solutions Hub, Responsible AI Services, and on-demand AI Ethics Advisory Board, and discover business opportunities.