Though it has been a long time coming, we may be nearing an inflection point in the implementation of artificial intelligence and machine learning in the financial services industry. For a number of years, many organisations have invested heavily in AI-enabled software, whether through their own research and development or by acquiring services from third parties. The nature of the beast is that experimentation is vital for success, and many experiments will inevitably fail; but perhaps 10-20% typically show promise, and those have the potential to be truly game changing for the industry.

Interestingly, in the US, while COVID-19 lockdown measures have had hugely negative impacts on hiring across sectors, data science roles in finance and insurance actually increased (at least in the early stages of the pandemic). This suggests the industry still sees these tools and methods as compelling and vital to the future of its organisations. However, a time must come when we have to evaluate the successes and understand the return on investment, which is not always straightforward to calculate in the AI world. This is partly because implementing one set of models today may not make a huge impact on this year’s bottom line (although often it can), but is a necessary step towards a step change in business models 5 or even 10 years down the line. We are already seeing the beginning of this shift in thinking, with the recent launch of Ki and the listing of AI-enabled insurer Lemonade. The direction of travel is without doubt a more digital and automated business model that is more scalable, more accurate and on demand. However, in insurance we see a vast range of maturity today when it comes to modernisation with AI, as illustrated in the continuum below.

[Figure: an illustrative continuum of AI modernisation maturity in insurance]

With this evolution we need to understand how such business models are going to be governed, and whether the structures and skills currently in place are fit for purpose. With increasing reliance on automation and complex modelling, who is accountable under the Senior Managers and Certification Regime, and what diagnostic tools are needed? Are existing Risk Functions, often responsible for understanding the risk around traditional modelling, equipped for this new world?

Within insurance, the idea of model-driven decisions is nothing new. Pricing, reserving and capital modelling have always been major drivers of business strategy, and are well understood areas amongst Risk and Actuarial Functions. The past has not been without mishaps, with instances of under-pricing or under-reserving, but there has always been scrutiny around traditional modelling and the setting of assumptions, especially where expert judgement is key. With the current trend, however, we are seeing complex modelling used across the value chain, in areas of the business that traditional risk professionals are less familiar with. There is a strong reliance either on a small number of technical data scientists to understand these models and their use cases, or on third-party vendors. But with models increasingly used in customer-facing applications, the responsibility lies with the business if things go wrong. I’ve written previously on the potential risks associated with this kind of modelling.

This is where I believe we will see the rise of a new role within financial services: the “Chief AI Risk Officer”. Whether in the short term this falls to an existing CRO, COO or Chief Actuary is open to debate, but the role could emerge as one of the most important in a future business model that is largely automated. That is, if one could fill such a far-reaching position, which would require an in-depth understanding across operations, business models and strategy, as well as deep technical understanding of AI models and their limitations. While independent from the core software and AI development teams, the office of the Chief AI Risk Officer would need the capability to challenge the models: how they are developed and validated, and how they are used in the business. With AI-focussed regulations expected over the next few years to add to existing data regulations like GDPR, they will also need an in-depth understanding of what must be done to remain compliant with an array of increasingly complex regulations, guidance and standards. Their function will also need advanced AI tooling to analyse core modelling activities, which may entail complex software development initiatives of its own.
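To make that tooling idea concrete, here is a minimal sketch in Python of the kind of independent check a model-challenge team might run: a disparate impact ratio computed over a log of automated quoting decisions. The data, column names and the 0.8 “four-fifths” threshold are all illustrative assumptions of mine, not a prescription for how any particular firm should do this.

```python
import pandas as pd


def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Each group's favourable-outcome rate relative to the best-treated group.

    A common rule of thumb (the "four-fifths" rule) flags ratios below 0.8
    for further investigation; here it is an illustrative threshold only.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()


# Hypothetical decision log: one row per quote, with a binary "offered" outcome.
decisions = pd.DataFrame({
    "postcode_band": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "offered":       [1,   1,   1,   0,   1,   0,   0,   1],
})

ratios = disparate_impact_ratio(decisions, "postcode_band", "offered")
print(ratios.round(2))
print("Groups flagged for review:", list(ratios[ratios < 0.8].index))
```

A real challenge function would of course go much further, segmenting by product, testing statistical significance and tracing flagged outcomes back to model features, but even a simple ratio like this gives the second line a quantitative hook for conversations with the development teams.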

Some examples of the scope of such a function:
- Independent challenge of how AI models are developed, validated and used across the business
- Oversight of compliance with emerging AI-focussed regulations, guidance and standards, alongside existing data regulation such as GDPR
- Analysing models for bias and for potential customer harm
- Building intelligent Key Risk Indicators and dashboards from “live” operational data

We have begun thinking about how to equip the AI Risk Management Function of the future, and many tools are in development: from analysing models for bias, to utilising “live” data from automated customer-facing processes to build more intelligent Key Risk Indicators and dashboards that analyse both the value delivered to customers and the potential for harm (a brief sketch of one such monitor follows below). We believe this is an exciting time to be at this inflection point of AI development in financial services, and the next 5 years will usher in a new age of automated, digitally led financial services.
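As a sketch of how a “live” KRI might work in practice, the Python snippet below maintains a rolling window over a stream of automated decisions and flags when the decline rate drifts beyond a set tolerance. The metric, field names and thresholds are hypothetical choices of mine for illustration; a production dashboard would track many such indicators, segmented by product and customer group.

```python
from collections import deque


class DeclineRateKRI:
    """Rolling-window Key Risk Indicator over a stream of automated decisions."""

    def __init__(self, window: int = 500, threshold: float = 0.30):
        # deque(maxlen=...) keeps only the most recent `window` decisions.
        self.decisions = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, declined: bool) -> bool:
        """Record one automated decision; return True if the KRI breaches."""
        self.decisions.append(1 if declined else 0)
        decline_rate = sum(self.decisions) / len(self.decisions)
        return decline_rate > self.threshold


# Stand-in for a live feed of customer-facing decisions.
kri = DeclineRateKRI(window=500, threshold=0.30)
for declined in [False, False, True, False, True, True, True]:
    if kri.observe(declined):
        print("KRI breach: automated decline rate above tolerance - escalate for review")
```

The point of an indicator like this is speed: rather than waiting for a quarterly model review, the risk function sees drift in customer outcomes within hours of it emerging from an automated process.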