The pandemic has strengthened the business case for Artificial Intelligence (AI) in financial services (FS) to drive efficiency, competitiveness, and better service in the face of fast-changing consumer behaviours. But COVID-19 has also highlighted that as the adoption of AI in FS increases, so will the risks associated with it. As set out in our Financial Markets Regulatory Outlook, in 2021 we expect supervisors to put increasing pressure on firms to demonstrate that their use of AI is trustworthy: robust, compliant and ethical.
COVID-19 has underlined the importance and benefits of AI. Banks that used machine learning (ML) were able to process high volumes of UK government-backed loan applications with increased operational efficiency. And according to a Bank of England (BoE) survey, around 50% of UK banks expect an increase in the importance of AI for future operations as a result of COVID-19.
But the pandemic also showed that AI governance and risk management remain a challenge for many firms. Changes in economic conditions, consumer behaviour and banks’ internal processes led to an appreciable deterioration in the performance of some ML models. In the same BoE survey, 35% of UK banks reported that COVID-19 had had a negative impact on their ML models.
The lower model performance was linked to two types of model drift. The first – data drift – was caused by a change in the distribution of the data used in a predictive task. For example, Natural Language Processing models used to respond to customers’ queries may have been trained on a specific collection of texts. Major changes in vocabulary, and in the meaning and/or significance of words, caused by the pandemic (e.g. working from home, isolate, furlough) may not fit the patterns previously learned by the model. The second – concept drift – was caused by changes in the underlying processes themselves. For example, if collections processes change, then the collections and recovery models may require adjustment.
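Data drift of this kind is typically detected by comparing the live distribution of a model input against its training-time distribution. One widely used metric in FS model monitoring is the Population Stability Index (PSI). The sketch below is a minimal, illustrative implementation in Python; the function name, equal-width binning scheme and synthetic data are our own choices for illustration, not taken from any particular firm's framework:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    Bins are derived from the expected (training-time) sample; a small
    epsilon avoids log(0) when a bin is empty."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # index of the bin x falls in
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Synthetic example: a stable feature vs one whose mean has shifted
random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]  # training-time distribution
same  = [random.gauss(0, 1) for _ in range(5000)]  # live data, unchanged
shift = [random.gauss(1, 1) for _ in range(5000)]  # live data, mean shifted by 1 sd

print(f"stable:  PSI = {psi(train, same):.3f}")   # small: no material drift
print(f"shifted: PSI = {psi(train, shift):.3f}")  # large: distribution has moved
```

Commonly cited rules of thumb treat a PSI below 0.1 as stable and above 0.25 as a material shift warranting investigation, though the thresholds a firm adopts should reflect its own model risk appetite.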
Model drift is not a new concept and has long been a focus for FS supervisors in their scrutiny of “traditional” models. However, data and concept drift may be harder to identify and manage within an AI environment. This will depend on the complexity of the models used by a firm and the degree of familiarity it has with AI. But for most firms, deploying AI will require enhancements to their governance and model risk management frameworks to ensure they remain robust, especially in relation to model ownership, validation and performance monitoring. Increasing use of third-party providers of data, infrastructure and off-the-shelf AI solutions will further accentuate these AI governance and risk management challenges.
AI governance and risk management frameworks will also need to address the greater degree of interaction between risk classes. For example, in the context of AI applications which use personal data and have a direct impact on customers, conduct and data protection requirements will intersect significantly. In some areas, requirements will be aligned or complementary, including in relation to transparency and explainability. In other areas – where specific data protection requirements do not have a parallel in conduct regulation – regulatory implementation challenges may arise, for example when determining the appropriate lawful basis under the General Data Protection Regulation for processing personal data.
And while regulation sets the wider boundaries, firms will also need to understand and fulfil supervisory, customer, and social expectations of what is ethical, fair and acceptable. They will need to understand and address specific risks, potential harms and ethical considerations in the context of each AI use case. Strong ethical frameworks – built on a bedrock of regulatory compliance – will play an important role in helping firms identify and assess risks. They can also help organisations navigate possible trade-offs, for example between individuals’ privacy and AI accuracy.
Against this background, firms should adopt a comprehensive and integrated approach to regulation and ethics to ensure good customer outcomes, compliance, and operational efficiency.
The role of AI in FS will continue to grow despite the pandemic and – more significantly – because of it. But to make AI business models sustainable, firms must also invest in the skills and capabilities that business owners, control functions, and the board need to ask the right questions, interpret ethical and compliance requirements, understand model risks and put the right mitigants in place.
And ML and data science now also sit higher on the priority list for policymakers because of their increasing use and, alongside their benefits, their potential risks.