Insurers Don’t Just Need AI. They Need Responsible AI.


By: Miklos Mattyasovszky and Kevin Sanborn

Insurance companies make high-stakes decisions that impact lives and livelihoods every day. As artificial intelligence (AI) plays a bigger role in decision-making processes, particularly in critical areas such as underwriting, pricing, and claims processing, understanding the underlying models becomes increasingly difficult. Insurers that wish to thrive now and in the future must integrate Responsible AI into their business strategy to harness the capabilities of AI effectively without causing harm to individuals or communities.


A Responsible AI strategy includes guidelines, processes, and skilled human resources that enable ethical and robust development and implementation of AI in the insurance sector. Responsible AI practices reduce unjustified biases, increase the explainability of insights, and make the integration of AI more trustworthy so that human decisions can be effectively and efficiently supported by data.

Prakash Raghavan, assistant vice president of digital strategy and incubation at Unum Insurance, a benefits provider and insurer in the disability, life, accident, and critical illness space, believes Responsible AI is a core competency for any organization that strives to use AI to its full potential. “AI and AI ethics experts can help to bridge the gap and bring in external perspectives so that the development and the implementation of the AI strategy can benefit from the learnings and best practices across the industry and can protect from any possible legal or customer experience implications,” Raghavan says.


Insurance leaders commonly associate Responsible AI with compliance and risk avoidance. But as AI is integrated into key business processes, Responsible AI becomes essential to innovation and competitive differentiation. In many cases, regulators will require model transparency and explainability before granting approval. A strong Responsible AI practice can unleash the power of data, and the power of people, to deliver insights that guide strategy, improve operations, shape better products, deepen customer understanding, and keep insurers ahead of the competition.

Raghavan stresses the need for a comprehensive approach to AI.

“Building an AI strategy in these times has to be a multi-dimensional effort,” he says. “To stay ahead in the market it needs to consider the industry-wide trends, current and upcoming technologies, strengths and weaknesses of the competitors, and understanding the current tech and data landscape of the organization.”


When insurance companies cannot trust AI models, they may be forced to adopt conservative strategies that limit innovation. Inaction can expose organizations to mistakes and leave them without a strategy to cope. For proactive companies, on the other hand, a Responsible AI strategy propels AI innovation that creates competitive advantages.

“Avoidable errors in technologies make customers lose trust in companies and seek an alternative,” says Cansu Canca, ethics lead and research associate professor at the Institute for Experiential AI at Northeastern University. “When a company continues to make errors, their competitors may turn their ‘responsible’ approach to innovation into a market value.”

Raghavan from Unum highlights the need to ensure data sources and any AI models have been audited for bias.

“Bias in an AI model can make or break the success of the model,” he says. “Existence of bias in a model will make it a challenging task for any model to perform and pass through such stringent performance testing through the implementation process.”

The use of human experts to review, analyze, and if necessary adjust model output is essential to the practice of Responsible AI. The human in the loop is critical, says Canca, “for the same reasons why financial services or insurance companies do not completely hand operations over to AI: The AI is not sophisticated enough that we can rely on it.”


Most insurance leaders know that data and AI will have to be a core part of their growth strategy over the next ten years. Adopting new technology too slowly can be just as damaging as adopting it too fast. Companies that rely on outmoded approaches to pricing, claims processing, product development, and customer experience lose out to companies that use AI to power their business.

That does not mean that the risks associated with black-box models, biased data, and lack of understanding will disappear. They need to be accounted for and incorporated into a comprehensive framework. However, insurance companies typically understand these AI risks poorly, which leads to conservative decision-making.

“More often organizations take a myopic view considering the current constraints within the organization and fail to explore and challenge the boundaries,” says Raghavan. 

But that view doesn’t have to be the norm. While such risks cannot be eliminated, a Responsible AI roadmap can help insurance companies plot a growth strategy that successfully navigates novel challenges in AI ethics and compliance.

Hear expert insights about AI for insurance leaders: join our webinar on June 14.