As financial services (FS) companies increasingly adopt solutions driven by artificial intelligence (AI), they face the new challenge of designing governance and controls specific to AI. We recently co-authored a paper on this topic in the Berkeley Technology Law Journal with Professor Luciano Floridi, Director of the Digital Ethics Lab at the Oxford Internet Institute and Chair of the Data Ethics Group at the Alan Turing Institute. In this blog post, we aim to summarise our main takeaways.
The full paper is available here.
Adoption of AI is still in its infancy in the UK FS sector, despite its high potential impact in driving cost efficiencies and competitiveness. Some of this hesitation stems from growing regulatory scrutiny, with the Financial Conduct Authority (FCA) and the Information Commissioner's Office (ICO) actively issuing opinions on AI and machine learning (ML).
The paper uses the risk of discrimination as an example to discuss the practical challenges FS firms face in managing the risks introduced by AI. We walk through the AI product lifecycle, setting out the process by which risks can be identified, assessed, controlled, and monitored in an FS company, and we derive recommended practices and principles from past regulatory publications.
Fairness in the financial services industry
Machine learning is increasingly being used to make or aid decisions that are consequential for FS customers, from evaluating their creditworthiness to recommending investment products to pricing their insurance premiums. It also affects employees, through CV-screening algorithms and performance-tracking measures.
For a discussion on the challenges of fairness, see Michelle Seng Ah Lee’s previous blog post.
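To make one such fairness check concrete, here is a minimal sketch of a demographic parity comparison on hypothetical credit-approval decisions. It is illustrative only and not taken from the paper; the data, the column names, and the four-fifths-style threshold are all assumptions.

```python
import pandas as pd

# Hypothetical credit-approval decisions; both column names are assumptions.
# `approved` is the model's 0/1 decision, `group` a protected attribute.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   1],
})

# Demographic parity compares approval rates across groups.
rates = df.groupby("group")["approved"].mean()
print(rates)

# The "four-fifths" rule of thumb flags cases where the lowest group's
# approval rate falls below 80% of the highest group's rate. The 0.8
# threshold is a convention, not a legal or regulatory bright line.
ratio = rates.min() / rates.max()
print(f"Approval-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparity: warrants further investigation")
```

Demographic parity is only one of many competing fairness definitions, which is precisely why choosing the right one for a given FS use case is hard.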
Managing risks of AI through its lifecycle
Academic research has tended to focus on model and algorithmic risks, such as bias and accuracy, in isolation. In reality, model design and performance must also be considered alongside non-model risk domains, such as regulatory and compliance risk, technology risk, people risk, supplier risk, conduct risk, and market risk.
The adoption of AI does not require an overhaul of the existing enterprise Risk Management Framework (RMF), but rather an awareness of how AI may complicate the detection of risks as they manifest in unfamiliar ways. The volume and speed of the data processed may demand a much faster reaction to any errors, and the complexity of a machine learning algorithm may hinder its explainability and auditability.
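As one concrete example of the monitoring this implies, a firm can continuously compare the live distribution of a model input against the distribution observed at validation, and escalate when they diverge. The sketch below uses the Population Stability Index (PSI), a metric commonly used in credit-risk model monitoring; the synthetic data, function, and thresholds are illustrative assumptions rather than anything prescribed in the paper.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline sample and a live sample of one model input.
    Bin edges are set from the baseline distribution."""
    # Quantile-based edges keep every bin populated in the baseline.
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)

    # Small floor avoids log-of-zero in empty bins.
    eps = 1e-6
    exp_frac = np.clip(exp_frac, eps, None)
    act_frac = np.clip(act_frac, eps, None)

    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # input distribution at validation
live = rng.normal(0.3, 1.2, 10_000)      # shifted distribution in production

psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f}")

# Commonly cited (illustrative) thresholds: <0.1 stable, 0.1-0.25 monitor,
# >0.25 a significant shift that may warrant model review.
if psi > 0.25:
    print("Significant input drift: escalate for review")
```

In practice, such checks would run on a schedule against production feature logs, with alerts wired into the firm's existing incident and model-review processes.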
Supervisors will expect firms to have robust and effective governance in place, including an RMF, to identify, reduce, and control the risks associated with the development and ongoing use of each AI application across the business.
Our paper proposes a process for each stage in the AI product development lifecycle, from Design to Build to Productionise to Monitor. Each process in the figure below corresponds to a chapter and section in the full paper.
Closing remarks
While adoption rates have been slow, AI will increasingly become an integral component of FS firms’ strategies to achieve operational efficiency, improve customer service, and gain insights for competitive advantage. It is imperative that organisations understand the implications of this adoption from a risk perspective, such that appropriate governance and controls are put in place to mitigate the new and exacerbated risks.
The paper referenced in this blog post explores the practical implications of risk management throughout an AI solution's product lifecycle. With a particular focus on the UK and the EU, suggested approaches are coupled with regulatory principles and precedents. The primary example use case highlighted is the risk of discrimination against protected classes. While there has been a wide array of studies on technical and theoretical definitions of fairness, further work is required to devise a framework for determining which definitions are most appropriate when implementing fairness metrics in the FS industry.
In summary, the risks of AI are not confined to the algorithm itself, but rather affect the entire organisation. AI-specific considerations should be integrated into existing RMFs to ensure they remain fit for purpose. Only then will FS firms feel empowered to use AI, with confidence that AI-related risks can be effectively identified and managed.