Setting a Foundation for Responsible AI in Healthcare
Although most experts believe artificial intelligence can help clinicians make better decisions, a lot has to happen before that promise can be realized; otherwise, such systems could actively cause harm. The trick is knowing where and when to build ethics into the process.
Trained as a bioethicist, Cansu Canca has been exploring these questions and highlighting the ethical challenges of AI for years. As Director of Responsible AI Practice at the Institute for Experiential AI, she has made it her goal to develop and deploy AI responsibly—that is, in ways that both protect vulnerable populations and advance the state of the art.
More than Mere Bias
At The State of AI in Precision Health on Oct. 10, Canca introduced key concerns surrounding AI's role in healthcare, including racial and gender biases in AI algorithms used for risk assessment and decision-making. One notable example is a study led by Dr. Ziad Obermeyer of the UC Berkeley School of Public Health, which showed that a widely used healthcare algorithm was racially biased because it used past healthcare costs as a proxy for health needs; since Black patients historically incur lower costs at the same level of illness, they were systematically under-referred for extra care. When the algorithm was corrected, it produced significantly more equitable care for Black patients. Similar biases have been observed in studies of gender disparities in health.
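To see how a proxy label can produce this kind of bias, consider the following minimal sketch in Python. It uses purely synthetic data, not the study's, and assumes for illustration that two groups have identical distributions of illness but one incurs systematically lower healthcare costs. An algorithm that ranks patients by cost then under-selects that group for extra care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic illustration: two groups with the same underlying
# distribution of illness (true health need).
group = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need

# Assume group B incurs systematically lower costs for the same
# illness (e.g., because of unequal access to care).
cost = illness * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.1, n)

# An algorithm that flags the top 20% of patients by *cost* for
# extra care will under-select group B.
threshold = np.quantile(cost, 0.80)
flagged = cost >= threshold

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: flagged {flagged[mask].mean():.1%}, "
          f"mean illness among flagged: {illness[mask & flagged].mean():.2f}")
```

Running this shows the lower-cost group flagged far less often, and its flagged members sicker on average, mirroring the pattern Obermeyer's team documented: at any given risk score, the disadvantaged group was in worse health.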
But Responsible AI is about more than just fixing bias. Canca cited another case involving UnitedHealthcare, in which AI recommendations for medical care were implemented incorrectly due to missing or incomplete data, even though the system included a "human in the loop." As she explained, every recommendation passed through human decision-makers, yet physicians' recommendations were routinely overridden: the staff deciding whether to follow the physician's advice or the AI's were, in fact, penalized if they didn't follow the AI.
“So it's not just about what kind of an AI system that we are implementing,” Canca said, “but it's also about how we communicate the AI system’s results and in what kind of accountability structure we are implementing.”
Another layer of concern involves the privacy risks raised by AI technologies, particularly medical devices that collect and share medical records with third parties, often without patient consent. Even when data are anonymized, de-identification can often be reversed, leaving patients exposed to re-identification.
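A classic demonstration of this risk is a linkage attack: joining an "anonymized" dataset to a public one on quasi-identifiers such as ZIP code, birth date, and sex. The hypothetical sketch below (all names and records invented) shows how little it can take:

```python
import pandas as pd

# An "anonymized" medical dataset: names removed, but quasi-identifiers
# (ZIP code, birth date, sex) remain.
anonymized = pd.DataFrame({
    "zip": ["02115", "02115", "90210"],
    "birth_date": ["1984-03-02", "1991-07-19", "1984-03-02"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A public dataset (e.g., a voter roll) links those same
# quasi-identifiers to names.
public = pd.DataFrame({
    "name": ["Jane Doe"],
    "zip": ["02115"],
    "birth_date": ["1984-03-02"],
    "sex": ["F"],
})

# Joining on the quasi-identifiers re-identifies the record.
reidentified = public.merge(anonymized, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Because combinations of ZIP code, birth date, and sex are unique for a large share of the population, stripping names alone is rarely enough to protect privacy.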
The Definition of Fairness
“So we have a variety of issues,” Canca said. “It's not just about privacy, and you cannot just rely on the laws around privacy. It's not just about bias, although bias is huge, and if you think about how to deal with bias and fairness, you have to get into the very complicated area of, what does fairness mean in insurance versus in acute care versus in decision-making during a pandemic?”
Those decisions, she added, will require input from the social sciences and from philosophy on the very definition of fairness. Accordingly, assessing AI systems in healthcare requires a comprehensive ethical framework.
While healthcare professionals may be familiar with bioethics, AI complicates things significantly. Evaluating these systems means considering how data are selected, how models are designed, and when and where AI recommendations factor into decisions. And because these systems are imperfect, they should be designed to augment, rather than replace, human expertise, with clear accountability structures that ensure healthcare professionals can understand and collaborate with them.
Finally, Canca stressed the importance of addressing ethical concerns early in the development process rather than waiting until an AI system is already deployed. Fixing ethical problems after deployment is not only more expensive but also less effective.
“What do we want? Well, we want ethics to be integrated. We don't want to be the policing overseers, because this is not the point. The point is to make sure that we work together to make sure that the problems are resolved at every step so that you continue with confidence and without wasting resources.”
Learn more about the Responsible AI practice at the Institute for Experiential AI.