While insurance is often perceived as slow-moving when it comes to innovation, it is in fact leaping ahead of many other industries in experimenting with artificial intelligence (AI) and machine learning. Of course, using data to gain insight is nothing new for insurers, which have a long history of applying statistics and actuarial science to pricing, reserving and risk management, but we are seeing a huge uptick in the use of newer modelling methods and non-traditional data to solve problems and increase efficiencies across the wider value chain.

Some models are being used to upgrade traditional actuarial models, or at least to provide alternative proxy models. Others support underwriting decisions using less traditional data, such as a customer’s behaviour on a call or website, or apply more automated techniques to improve data quality and efficiency across legacy back-office systems.

We are now at the point where there is no doubt that using AI appropriately will be a game changer in an increasingly competitive market, and we expect many of the proofs of concept and experiments of the last few years to go into live production systems in the coming year. However, there is increasing concern from second-line functions that these models and their use cases have not had the appropriate review from a risk perspective.

The fast-moving nature of innovation means that existing risk frameworks, processes and skills are often not well suited to validating and risk-assessing AI. There is also often a communication gap between insurers’ data science teams, who frequently lack deep industry knowledge, and risk and actuarial teams, who don’t necessarily have the specialist skills or mindset required to evaluate new models that can be quite different from traditional ones. Add to this the evolving EU regulatory landscape, with many new developments expected in 2020/21, and risk management and compliance in relation to artificial intelligence is only going to become more complex and onerous.

To bring this to life, we have selected a few example use cases, and their potential risks, that we are seeing enter the market across all lines of insurance:

Pricing and Reserving

Use case: A multi-dimensional, complex model (e.g. a deep neural network) is used to find correlations between existing customer features and their claims history, informing reserving and future pricing. The model uses many known features of the customer, including personal details and past behaviour. This might include driving behaviour gained from telematics devices for motor insurance, or health data from similar past policyholders for longevity and health modelling.

Potential risks:
- Model complexity and the opaqueness of some methods can make them difficult to understand and validate.
- Decisions may unintentionally be based on a protected characteristic (e.g. gender, race) that is correlated with a set of features used in combination (i.e. proxy variables) – e.g. a particular type of car, location or past behaviour may strongly correlate with a single protected characteristic.
- Potential regulatory breaches, including GDPR and the use of personal data.
- With increasing personalisation and more specific risk assessments, certain customer demographics may be found to be riskier and priced out of the market, versus current risk pooling.
- More specific and complex customer risk models are likely to drift over time, or after extreme events that were not factored in, giving a false sense of accuracy.
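The proxy-variable risk described above can be illustrated with a minimal sketch. The data, feature names and threshold below are invented for the example; a real validation exercise would use the insurer's own rating factors and a fuller statistical test.

```python
# Illustrative sketch (invented data): checking whether an apparently
# neutral rating factor is acting as a proxy for a protected characteristic.

def pearson_corr(xs, ys):
    """Plain Pearson correlation coefficient between two numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy portfolio: car_type_code is the rating factor actually fed to the
# pricing model; protected is a characteristic the model must not use,
# even indirectly.
car_type_code = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
protected     = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]

corr = pearson_corr(car_type_code, protected)
if abs(corr) > 0.7:  # threshold is a judgement call for the validator
    print(f"car_type_code may be a proxy (corr={corr:.2f}) - investigate")
```

In practice a single correlation coefficient is only a first screen; combinations of features can proxy a protected characteristic even when each feature individually shows weak correlation.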


Use case: An advanced model is used to make underwriting decisions, drawing on a variety of online information, aerial photos and other resources (e.g. in commercial property).

Potential risks:
- Because the model, and therefore the underwriting decisions, rely heavily on data, misleading data or incorrectly classified characteristics (e.g. construction materials) can result in unknowingly taking on business outside of risk appetite (i.e. poor data governance and validation, and the need for a human in the loop).
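One simple form of the "human in the loop" control mentioned above is a triage rule that routes submissions with unrecognised or low-confidence data to a human underwriter instead of the model. The field names, categories and threshold below are invented for illustration.

```python
# Illustrative sketch (invented schema): route property submissions with
# unrecognised or low-confidence characteristics to manual review rather
# than letting the model underwrite potentially misclassified risks.

KNOWN_MATERIALS = {"brick", "steel", "timber", "concrete"}

def triage(submission, min_confidence=0.9):
    """Return 'auto' if the record is clean enough for automated
    underwriting, otherwise 'human_review'."""
    material = submission.get("construction_material")
    confidence = submission.get("classifier_confidence", 0.0)
    if material not in KNOWN_MATERIALS or confidence < min_confidence:
        return "human_review"
    return "auto"

print(triage({"construction_material": "brick", "classifier_confidence": 0.97}))   # auto
print(triage({"construction_material": "thatch", "classifier_confidence": 0.99}))  # human_review
```

The design choice here is deliberately conservative: anything the upstream classifier is unsure about defaults to human review, so data-quality failures degrade to extra workload rather than silent risk-appetite breaches.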

Customer Servicing and Interaction

Use case: A model incorporates personal and activity information, captured from phones and smart watches, to analyse the health of the policyholder and to implement “nudging”, whereby the policyholder is incentivised to be more active to improve their long-term health.

Potential risks:
- Similar risks in using personal information as in the first example.
- While nudging techniques may be mutually beneficial for the policyholder (increased health) and the insurer (reduced claims), there is concern over whether this is fair to consumers who do not want to share such personal information.

Use case: Chatbots are used widely for customer interaction, whether for selling or renewing a policy, or for making claims or complaints.

Potential risks:
- Chatbot behaviour can vary widely depending on how the bots are trained, and can lead to unintended outcomes and customer frustration, impacting brand and reputation.

Claims Handling

Use case: Advanced fraud models make use of a wide variety of customer and behavioural data (such as voice interactions) to identify claims fraud or the risk that false information has been provided.

Potential risks:
- The imbalanced nature of the data (very few true fraud events versus many non-fraud events) and the high complexity of these models can result in a poor understanding of the model’s levers, leading to certain demographics or behaviours being incorrectly targeted.
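To make the imbalance point concrete, one common mitigation is to reweight the training data so the rare fraud class is not drowned out. The sketch below uses the standard "balanced" weighting scheme, weight = n_samples / (n_classes * n_class); the portfolio numbers are invented.

```python
# Illustrative sketch (invented numbers): "balanced" class weights for a
# fraud model, so the rare fraud class carries proportionally more weight
# during training. This is the scheme behind e.g. scikit-learn's
# class_weight="balanced" option.

from collections import Counter

def balanced_class_weights(labels):
    """weight_c = n_samples / (n_classes * count_c) for each class c."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Toy claims book: 990 legitimate claims vs 10 fraudulent ones.
labels = [0] * 990 + [1] * 10
weights = balanced_class_weights(labels)
print(weights)  # the fraud class ends up weighted ~99x the non-fraud class
```

Reweighting addresses the imbalance, but not the interpretability risk in the table above: a validator still needs to understand which inputs drive the fraud score before the model is allowed to flag customers.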

Many risk functions currently feel ill-equipped to properly validate and assess their company’s use of artificial intelligence, especially with regard to mitigating such risks. However, the industry is moving to build the tools and frameworks necessary to alleviate this concern, and to keep up to date with fast-moving regulations. It is as important as ever to strike a balance between innovating fast and innovating with confidence, so that there is no potential for poor customer outcomes.

Other Interesting Reading:

EU Ethics Guidelines for Trustworthy AI

ICO AI Auditing Framework

Design Principles for Ethical AI