
What’s Next for Responsible AI in 2024?

Responsible AI and AI ethics experts from the Institute look ahead at what's next for the field.
January 16, 2024

Shifting Societal and Regulatory Environment

Last year, AI technology was in the headlines more than ever before, and not always for positive reasons. It was our first full year living with generative AI at our fingertips, through tools like ChatGPT and Midjourney. We witnessed more major scandals in the real-world use of AI, like lawsuits against health insurance companies for automatically denying claims and against OpenAI/Microsoft for misusing copyrighted text in their models. Large tech companies continued to dominate the news cycle on AI governance, as OpenAI canned its risk-averse board and Microsoft followed Google’s lead in dissolving (or integrating) its AI ethics team and pursuing alternatives.

Partially in response to these events, high-profile summits were held, weighing in on the often-emotional debate over near-term ethics versus far-future speculation. Governments around the world started to roll out new or more comprehensive regulations, like the EU AI Act and the US “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The concrete impact of these regulations remains to be seen, but it seems the era of simply posting codes of ethics online is finally over. So what is next for Responsible AI in 2024?

Responsible AI Trends Entering 2024

At the Responsible AI Practice at The Institute for Experiential AI at Northeastern University, we are noticing a few trends as we enter the new year, along with an ongoing need to return to the basics of responsible innovation.

First: comprehensive AI use and governance. Although AI ethics initially became known for product- and platform-specific bias audits, lone whistleblowers, and short-lived committees, leading organizations now recognize the need for more systematic approaches. Deploying AI, and likewise Responsible AI, demands a strategy that spans teams and verticals, with new expertise, creative design workflows, and compliance with emerging regulations.

Second: dashboards. AI leaders and data officers are increasingly interested in AI governance software as the glue that connects teams and managers in monitoring AI impacts. These platforms let organizations register and monitor AI models, label high-risk uses, integrate ethically salient metrics, and flag legal requirements. Though such platforms can help identify problems, they may not be able to solve them or find opportunities to improve AI’s impact.
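As a rough sketch of what such a platform tracks (the class and field names below are hypothetical illustrations, not any specific vendor’s API), a governance registry might pair each model with an owner, a risk tier, ethics metrics, and outstanding legal flags:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"  # e.g., echoing the EU AI Act's risk tiers

@dataclass
class ModelRecord:
    """One entry in a hypothetical AI governance registry."""
    name: str
    owner_team: str
    risk_level: RiskLevel
    # Ethically salient metrics logged alongside accuracy
    fairness_metrics: dict[str, float] = field(default_factory=dict)
    # Legal requirements flagged for compliance review
    legal_flags: list[str] = field(default_factory=list)

class GovernanceRegistry:
    """Registers models and surfaces the high-risk ones for review."""

    def __init__(self) -> None:
        self._records: dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        self._records[record.name] = record

    def high_risk_models(self) -> list[ModelRecord]:
        return [r for r in self._records.values()
                if r.risk_level is RiskLevel.HIGH]

# Example: register a hypothetical claims-triage model and flag it for review.
registry = GovernanceRegistry()
registry.register(ModelRecord(
    name="claims-triage-v2",
    owner_team="underwriting",
    risk_level=RiskLevel.HIGH,
    fairness_metrics={"demographic_parity_gap": 0.08},
    legal_flags=["high-risk review required"],
))
for record in registry.high_risk_models():
    print(record.name, record.legal_flags)
```

Note that a registry like this only records and surfaces issues; deciding what to do about a flagged model still requires the human, organizational work discussed below.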

Third: trust as an asset. The Silicon Valley style of “move fast and break things” effectively put AI on the map and drew regulators’ attention, yet many of our clients and colleagues have grown their businesses on a reputation for safety, stability, or trust. In those contexts, AI ethics work is not the stereotypical foil to Machiavellian technological disrupters but rather continuous with existing due diligence, risk mitigation, and user/patient-centered design practices.

Making Technology Good in 2024

While these are all welcome developments, it is essential not to let the rush to deploy AI in every industry distract from the basics of good technological design. Lasting and impactful innovation requires:

1. anticipating consequences and uses of a technology,

2. reflecting frankly on its motivations and underlying values,

3. proactively collecting data and perspectives from diverse stakeholders, and

4. adapting the technology using what has been learned.

“When I taught technology ethics courses as Professor for Responsible Innovation, my overarching prescription was always to do the hard, human work that is required to adapt a technology to its context of use,” says Matthew Sample, AI ethicist in the Responsible AI Practice. “The lesson is the same even for companies pushing the limits of technological possibility.”

For AI, this process can take many forms, including conducting socio-technical research on model impacts, creating bi-directional channels of communication with users, honing iterative design methods, and hosting multi-stakeholder conversations about contested value concepts like health, privacy, or fairness. This work takes time, and there is no simple formula for balancing technical agility against the need to carefully measure and improve the human impact of a new product or platform.

As the best, and the rest, in industry tackle this challenge in 2024, we hope to see Responsible AI teams and methods continue to mature, ideally with success stories of their own making headlines. To get there, industry leaders and AI ethics experts alike will need to respond to the moment, adapting to new laws and fast-moving technical advances, while steadily building the capacity to do the day-to-day work of responsible innovation.

Learn more about RAI and our RAI Practice here.