In this installment of his takeaways from conversations with executives and leaders at VentureBeat Transform, Institute for Experiential AI Executive Director Usama Fayyad offers thoughts on one of three roundtable discussions.
The long, complex road toward trusted AI
This is a big topic. The session, led by Hilary Ashton from Teradata, resulted in a wide-ranging discussion with many examples of challenging situations companies face. It began with the importance of Responsible AI (an area of focus at our institute) and some of the ethical issues companies are confronting, then moved to regulation and the fact that regulators are far behind in understanding the technology, what guardrails are feasible, and what they can and cannot do.
This space is hard, and it's not practical for regulation to lead the industry. I believe our institute's emphasis on human-in-the-loop AI offers a natural framework for addressing initial regulation through liability: the technology should never make the decision. It should recommend an action or conclusion that a human expert then accepts, rejects, or adjusts, and that human (or legal entity) should be held liable for the final decision. That's the right dynamic because it incentivizes the human in the loop to think through how the technology is deployed and the decisions it helps make.

This need not slow technological progress; rather, it can position regulation as an accelerator. Well-designed regulation can provide safety guardrails and help the deciding person (or entity) adopt the machine's recommendations with less risk and more liability protection, so long as they stay within those guardrails. Many in the discussion agreed with this view: people using AI want government guardrails because following them can reduce their liability if something goes wrong.
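For readers who build these systems, here is a minimal sketch of what that human-in-the-loop pattern can look like in practice. It is written in Python with hypothetical names (`Recommendation`, `Decision`, `review`) chosen for illustration, not drawn from any particular product: the model is only allowed to produce a recommendation, a named human reviewer accepts, rejects, or adjusts it, and accountability is recorded against the reviewer rather than the model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ReviewAction(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    ADJUST = "adjust"


@dataclass
class Recommendation:
    """What the AI system is allowed to produce: a suggestion, never a decision."""
    proposed_action: str
    confidence: float
    rationale: str


@dataclass
class Decision:
    """The final, human-owned outcome; liability attaches to the reviewer."""
    final_action: Optional[str]
    review_action: ReviewAction
    accountable_reviewer: str            # the person (or legal entity) who signed off
    model_recommendation: Recommendation  # kept for audit purposes


def review(rec: Recommendation, reviewer: str, action: ReviewAction,
           adjusted_action: Optional[str] = None) -> Decision:
    """Record the human reviewer's call on a model recommendation."""
    if action is ReviewAction.ACCEPT:
        final = rec.proposed_action
    elif action is ReviewAction.ADJUST:
        if adjusted_action is None:
            raise ValueError("An adjusted decision must specify the new action")
        final = adjusted_action
    else:  # REJECT: no action is taken on the model's suggestion
        final = None
    return Decision(final_action=final, review_action=action,
                    accountable_reviewer=reviewer, model_recommendation=rec)


# Hypothetical example: the model suggests approving a loan; an analyst adjusts the terms.
rec = Recommendation(proposed_action="approve_loan_at_7pct",
                     confidence=0.82,
                     rationale="Income and credit history meet policy thresholds")
decision = review(rec, reviewer="analyst@example.com",
                  action=ReviewAction.ADJUST,
                  adjusted_action="approve_loan_at_9pct")
print(decision.accountable_reviewer, decision.final_action)
```

The design choice that matters here is the audit trail: every final decision carries both the model's original recommendation and the identity of the human who owned the outcome, which is exactly the information a liability-based regulatory regime would need.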
Interested in learning more? Dive deeper into Usama's VentureBeat Transform insights with his ideas on how the C-suite views generative AI and on using generative AI for customer service.