Financial services firms and cross-sector authorities across Europe are starting to look closely at the governance and model risk management challenges arising from material AI models, and at the implications for their regulatory and supervisory objectives.

The use of AI in financial services is still relatively young, but it has the potential to make firms more competitive, efficient, and profitable. Regulatory initiatives such as Open Banking and, in the future, Open Finance will continue to incentivise the development of advanced data analytics capabilities to generate significant value for both firms and customers.

Against this background, financial services firms are proceeding steadily, if cautiously, in their adoption of AI.

Regulators are keen to see the potential benefits of AI captured both by the industry and by regulators themselves. However, as adoption of AI increases in scale and strategic importance, its risk implications are also rising steadily up regulators’ agendas.

The current regulatory framework does not preclude the use of AI models. However, complying with governance and risk management requirements will generally be more challenging in an AI environment, for example in relation to model interpretability, stability, and performance. AI model risk will also interact to a much greater degree than before with other risk classes, such as conduct or data protection. Firms will therefore need to demonstrate that their model risk management frameworks have been enhanced to identify and manage a much broader set of risks, such as bias, discrimination, privacy, and broader data ethics implications.
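As a purely illustrative sketch of how such an enhanced framework might quantify two of these risks in practice, the example below computes a demographic parity gap (a simple proxy for bias in automated decisions) and a population stability index (a common proxy for model stability between validation and live use) for a hypothetical binary decision model. The function names, thresholds, and data are invented for illustration and do not come from any regulatory guidance.

```python
# Illustrative only: hypothetical checks a model validation team might run,
# not a prescribed or regulator-endorsed methodology.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between two groups (0/1 membership)."""
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return float(abs(rate_a - rate_b))

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between scores seen at validation and scores seen in production."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip live scores into the validation range so outliers land in the end bins.
    actual = np.clip(actual, cuts[0], cuts[-1])
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    decisions = rng.integers(0, 2, 1_000)        # 1 = approved (synthetic)
    group = rng.integers(0, 2, 1_000)            # protected attribute (synthetic)
    dev_scores = rng.normal(0.5, 0.1, 1_000)     # scores at validation
    live_scores = rng.normal(0.55, 0.12, 1_000)  # scores in production
    print("Demographic parity gap:", demographic_parity_gap(decisions, group))
    print("Population stability index:",
          population_stability_index(dev_scores, live_scores))
```

In practice, such metrics would sit alongside, not replace, the interpretability, performance, and conduct assessments described above; the point is simply that these broader risks can be made explicit and monitored over time.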

Some regulators, such as the DNB and the UK ICO, have started to issue draft AI guidance and frameworks. We expect this trend to strengthen in 2020, both at EU and national level. For example, the EIOPA InsurTech taskforce is currently assessing how AI differs from other commonly used insurance models, and will consider whether specific governance requirements are needed. In the UK, the FCA will publish a report, in partnership with the ATI, on how the financial sector can explain, and be transparent in, its use of AI.

These initiatives will help firms apply existing rules to their AI models. However, they are also designed to leave no doubt that supervisory expectations, whilst remaining proportionate to the risks involved, will be unaffected by firms’ use of AI per se, and that using AI will not dilute firms’ corporate governance and individual accountability obligations.

The latter will be a particular area of supervisory focus. Board members and senior management will need to demonstrate the necessary capabilities to consider, challenge, and manage the key strengths, limitations, trade-offs, and appropriateness of AI models. There will be strong expectations on boards to establish clear risk appetite frameworks and parameters within which AI systems can operate, and to satisfy themselves that effective controls are in place to ensure that neither is breached. This will be especially relevant for significant or material models, such as those used for risk and regulatory capital calculations, or to drive consumer outcomes.
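As a purely illustrative sketch of what board-set parameters and controls around an AI system could look like in practice, the example below encodes a handful of hypothetical risk-appetite limits as hard constraints on automated decision-making, escalating any breach to human review. The parameter names and thresholds are invented for illustration rather than drawn from any regulatory expectation.

```python
# Illustrative only: hypothetical risk-appetite parameters and a control that
# escalates breaches to human review; not a prescribed framework.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskAppetite:
    max_exposure: float            # largest exposure the model may approve alone
    min_confidence: float          # minimum model confidence for automation
    max_daily_auto_approvals: int  # daily cap on fully automated approvals

def route_decision(exposure: float, confidence: float,
                   auto_approvals_today: int, appetite: RiskAppetite) -> str:
    """Allow an automated decision only when every board-set limit is respected;
    otherwise escalate the case to human review."""
    breaches = []
    if exposure > appetite.max_exposure:
        breaches.append("exposure limit")
    if confidence < appetite.min_confidence:
        breaches.append("confidence floor")
    if auto_approvals_today >= appetite.max_daily_auto_approvals:
        breaches.append("daily volume cap")
    if breaches:
        # In practice, breaches would also feed management information to the board.
        print("Escalated to human review; limits breached:", breaches)
        return "human_review"
    return "automated"

if __name__ == "__main__":
    appetite = RiskAppetite(max_exposure=250_000, min_confidence=0.9,
                            max_daily_auto_approvals=500)
    print(route_decision(120_000, 0.95, 42, appetite))   # within limits -> automated
    print(route_decision(400_000, 0.95, 42, appetite))   # breach -> human_review
```

The design choice illustrated here is that the limits live outside the model itself, so the board can evidence both the parameters it has set and the control that enforces them.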

To support the board in discharging its obligations, model validation teams, as well as other critical oversight functions across all three lines of defence, will need the necessary skills and technical understanding of AI. 

This will take time. In 2020, we expect adoption of AI models in financial services to continue to grow, but at a cautious pace. Firms will tend to adopt AI models for lower-risk activities, such as validating or improving existing models, rather than for making fully automated, significant decisions about customers.

While the onus to demonstrate compliance of their AI models will remain squarely on firms, supervisors too will continue to build their AI supervisory skills and capability, and to consider what other initiatives could support a wider, yet safe, adoption of AI. We do not, however, expect any significant changes in regulatory or supervisory use of AI for the time being.

As supervisors establish their expectations for AI models, they may apply some of these to traditional models, where the same characteristics or shortcomings (e.g. opacity) may also exist, but have hitherto been overlooked or underestimated. Supervisors will expect firms to be proactive in considering and responding to any such deficiencies and, where relevant, apply enhanced governance and model risk management practices developed for AI models to traditional models as well.

This article is part of the Financial Markets Regulatory Outlook 2020 from Deloitte’s EMEA Centre for Regulatory Strategy.