- The EU has published its proposal for the Artificial Intelligence (AI) Act, a comprehensive legislative framework for AI. The Act will have an extraterritorial impact on AI providers and users in non-EU jurisdictions.
- Some AI systems used in Financial Services (FS) are in scope and deemed high-risk – e.g. those used to evaluate a person's creditworthiness, monitor and evaluate work performance and behaviour, or recruit staff.
- Providers and users of high-risk AI systems will have to comply with stringent rules before and after placing AI systems on the market or putting them into use. They will also be subject to conformity assessments, registration requirements, and hefty fines for non-compliance.
- We expect the EU to finalise the rules by 2023/2024. A 'wait and see' approach is not viable for firms, given the breadth and complexity of the proposed requirements.
- Firms should assess which of their AI systems are likely to be high-risk and conduct a high-level gap analysis against the Act's essential requirements. By doing so, they will gain an understanding of the implementation efforts required and the impact on their AI strategies.
April was a watershed moment for AI regulation. The European Union (EU) Commission published its AI Act, an ambitious proposal for a comprehensive legislative framework for AI - the first from a major global economy. In the words of Margrethe Vestager, EU Commission Executive Vice President and Technology Commissioner, the aim is ‘for Europe to become a global leader in trustworthy AI’.
The Act will apply to organisations providing or using AI systems in the EU. But it will also apply to providers and users located in other countries, including the UK, if their AI systems affect individuals in the EU. The extraterritorial impact is one reason why the AI Act has already been likened to the General Data Protection Regulation (GDPR). Its cross-sector remit and hefty fines - up to €30 million or 6% of an organisation's global turnover - are also reminiscent of GDPR.
However, the scope of the AI Act is much more comprehensive. It applies to all AI systems, not only those using personal data. It defines AI very broadly and addresses a much wider set of harms to individuals and society than those resulting from the misuse of personal data.
AI systems that can restrict an individual's financial and professional opportunities are deemed high-risk, for example, and subject to strict requirements. This puts financial services (FS) use cases fully into scope, such as AI systems used to assess creditworthiness or monitor employees’ performance and compliance.
The publication of the AI Act kicks off the legislative negotiations between the European Parliament and the Member States, which are likely to be complex and lengthy. We don't expect finalised rules before 2023 at the earliest. But what we do now have is certainty that AI regulation is coming, and clarity about the proposed approach. As this article will discuss, it would be unwise for firms to adopt a 'wait and see' approach.
Strict requirements for high-risk AI systems
The AI Act classifies AI applications based on their potential impact on our lives as individuals and society. The Commission proposes to ban a limited number of AI applications altogether, considering them unacceptable and contrary to the EU's fundamental values. The prohibited applications include AI systems designed to manipulate human behaviour, social scoring, and real-time remote biometric identification by law enforcement, although some narrow exemptions apply.
But the bulk of the Act focuses on 21 high-risk AI systems, including applications used in areas of employment and access to essential private services. This is where the relevance of the AI Act to FS becomes very clear. Three high-risk AI applications which are likely to be of immediate relevance to FS firms’ innovation strategies are those used to:
- evaluate a person's creditworthiness or credit score;
- monitor and evaluate work performance and behaviour, e.g. employee monitoring for compliance purposes or algorithmic management; and
- recruit staff, e.g. advertising vacancies, screening applications, or evaluating candidates in interviews or tests.
These systems are permitted, but firms must comply with strict rules. These include detailed obligations around risk management, data quality, technical documentation, human oversight, transparency, robustness, accuracy and security.
Providers of high-risk AI systems must complete a conformity assessment against all applicable requirements before the AI systems are put on the market or used. Providers must then register the AI system, including the declaration of conformity, in the public EU database on high-risk AI systems, which the Commission will set up. They must also have an ongoing monitoring system to address any risks arising after putting the AI system on the market or into use.
Requirements for users of AI systems include using the AI systems according to the provider’s instructions, safeguarding human oversight, ensuring the relevance of the input data, reporting serious incidents to the AI providers, and keeping logs of the AI system’s activities.
Banking regulators will be responsible for supervising AI applications provided or used by credit institutions regulated under the EU Capital Requirements Directive (CRD). The aim is to ensure the consistent enforcement of similar obligations across the AI Act and CRD in areas such as governance and risk management. The AI Act also includes limited derogations for credit institutions to avoid overlaps, e.g. concerning quality management systems and monitoring obligations of high-risk AI systems.
The inclusion of AI systems that assess creditworthiness in the list of high-risk use cases is controversial. Tobias Tenner, Head of Digitalisation at the Association of German Banks, said that ‘The use of AI systems by banks for creditworthiness assessment is already subject to a strict supervisory regime. [...] It is therefore neither appropriate nor necessary to use regulation on AI to introduce additional requirements for lending.’
However, emerging evidence shows that the growing use of alternative and personal data in algorithmic credit scoring could adversely affect individuals’ privacy and autonomy. The existing FS regulatory framework and GDPR do not fully address these issues, nor other areas of focus of the Act, such as transparency across the AI value chain.
Some FS firms may contend that more targeted amendments to FS or data protection regulation could help address some of these gaps. However, this would require the EU to forsake the objective and benefits of a single harmonised AI legislative framework.
Either way, we expect the question of whether to remove credit scoring AI systems from the list to be hotly debated during the negotiations. Some EU policymakers may prefer a shorter list of high-risk AI applications to make the AI Act more innovation-friendly. Others are likely to favour more protection for individuals, including by adding to the list other use cases which they regard as high-risk. Examples could include AI systems used to calculate insurance premiums for compulsory protection products, such as home or car insurance.
It is also worth noting that, once the AI Act enters into force, the Commission will review and amend the list of high-risk AI systems on an ongoing basis.
The impact of a (too?) broad definition of AI
Defining what AI is was always going to be a challenge. Yet, the Commission's definition is extremely broad by any standard. It seems to include nearly any software system that can make decisions, or generate outputs to support or influence decision-making, for a given set of objectives. The Commission lists several software development techniques as in scope. These include machine learning, but also statistical approaches, Bayesian estimation, and search and optimisation methods.
This definition is likely to capture decision-making systems in place for several decades, e.g. some standard credit scoring models. This could be a deliberate choice. Starting from a broad definition could allow a thorough discussion about what is and what is not AI.
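To illustrate the breadth of the definition, the sketch below shows a plain logistic-regression scorecard of the kind lenders have used for decades. It is a minimal, purely hypothetical example (made-up data, scikit-learn, and illustrative feature names of our own choosing), but because it is a statistical technique generating an output that influences a lending decision, it would appear to fall within the proposed definition - and, used for creditworthiness assessment, into the high-risk category.

```python
# Purely illustrative: a traditional logistic-regression credit scorecard.
# Under the Act's broad definition, even this simple statistical model -
# software producing an output that influences a decision about a person -
# would appear to count as an AI system. All data and thresholds are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Hypothetical applicant features: income (kEUR), existing debt (kEUR),
# years with current employer.
X = rng.normal(loc=[45.0, 10.0, 4.0], scale=[15.0, 8.0, 3.0], size=(500, 3))
# Hypothetical repayment outcomes, loosely driven by income minus debt.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=10.0, size=500) > 30.0).astype(int)

scorecard = LogisticRegression().fit(X, y)

applicant = np.array([[38.0, 12.0, 2.0]])            # income, debt, tenure
p_repay = scorecard.predict_proba(applicant)[0, 1]   # estimated repayment probability
decision = "approve" if p_repay > 0.6 else "refer for manual review"
print(f"estimated repayment probability: {p_repay:.2f} -> {decision}")
```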
Yet, we expect a heated debate on whether the definition should be narrower, or whether the Act should include carve-outs so that well-established systems are not considered high-risk by default. This could be reasonable, provided stakeholders can evidence that the harms arising from such systems are neither severe nor probable and that the regulatory impact of automatically considering them high-risk would be disproportionate.
The devil will be in the details
The EU will establish a new AI Board to oversee the implementation of the AI Act. Its tasks will include issuing recommendations and opinions to the Commission on the lists of prohibited and high-risk AI systems.
But it will be up to the individual Member States to designate one or more national competent authorities (NCAs) to enforce the regulation and set out rules on penalties. As others have observed, the EU may need to tighten some of the language in the Act if it is to ensure consistency in oversight and enforcement across the Member States.
For example, one of the data quality requirements is that ‘Training, validation and testing data sets shall be relevant, representative, free of errors and complete.’ Both firms and NCAs will need a lot more guidance and detail to assess whether a dataset meets such a high-level description. Also, expecting data sets to be entirely ‘free of errors’ may not be feasible in practice.
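By way of illustration, the sketch below shows the kind of simple automated checks a firm might run over a training dataset. The column names, checks and thresholds are illustrative assumptions of our own, not requirements taken from the Act; the point is that even these basic checks force a firm to decide for itself what 'complete' and 'free of errors' mean for its data.

```python
# A minimal sketch of automated data quality checks a firm might run when
# interpreting "relevant, representative, free of errors and complete".
# Column names, checks and thresholds are illustrative assumptions only.
import pandas as pd

def basic_quality_report(df: pd.DataFrame) -> dict:
    """Summarise completeness and obvious errors in a credit training set."""
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),            # completeness
        "duplicate_rows": int(df.duplicated().sum()),             # possible errors
        "negative_income_rows": int((df["income"] < 0).sum()),    # domain check
        "default_rate": float(df["defaulted"].mean()),            # crude representativeness proxy
    }

sample = pd.DataFrame({
    "income":    [42_000, 55_000, -1, 38_000, 38_000, None],
    "defaulted": [0,      0,       1,  0,      0,      1],
})
print(basic_quality_report(sample))
```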
The Commission is keen to ensure the final AI Act strikes the right balance between reducing harms and promoting responsible innovation. It is important that all stakeholders flag areas of uncertainty and unintended consequences to support EU policymakers in achieving this objective.
Conclusion
The question of how to regulate AI is rising quickly to the top of the global policy agenda. The United States welcomed the Act and confirmed it would work with the EU to foster trustworthy AI. The development of ethical and responsible AI will also be a key area of focus of the UK’s national AI strategy, which the Government plans to publish later this year. But the AI Act makes it clear that the EU intends to lead the shaping of international norms and standards.
The Act is likely to take two or three years to negotiate. After it enters into force, firms will have two years to comply with its requirements. This may seem a long time, but our experience with GDPR suggests that those who waited for the ink to dry on the regulation struggled to be ready in time, under similar implementation timelines.
The measures proposed in the AI Act will affect firms and their AI innovation strategies in the EU and worldwide. Firms should start by assessing which of their current and planned AI systems are likely to fall within the Act's definition of AI and, of those, which are high-risk. They should conduct a high-level gap analysis against the Act's essential requirements to estimate the potential impact. Firms can also use this information to understand the scale of the implementation effort required in due course, and to engage with policymakers to help the EU achieve its objective of reducing harms while promoting responsible innovation.
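As a starting point, a simple inventory along the lines below can support that initial triage. The structure is our own illustrative sketch, not a format prescribed by the Act; the use-case labels reflect the FS-relevant high-risk applications discussed above.

```python
# An illustrative (not prescribed) way to inventory AI systems and flag
# those likely to be high-risk under the proposal's FS-relevant use cases.
from dataclasses import dataclass

# High-risk use cases of immediate FS relevance named in the proposal.
HIGH_RISK_USE_CASES = {"creditworthiness", "employee_monitoring", "recruitment"}

@dataclass
class AISystem:
    name: str
    use_case: str
    affects_eu_individuals: bool  # triggers the Act's extraterritorial reach

    @property
    def likely_high_risk(self) -> bool:
        return self.affects_eu_individuals and self.use_case in HIGH_RISK_USE_CASES

inventory = [
    AISystem("retail credit scorecard", "creditworthiness", True),
    AISystem("CV screening tool", "recruitment", True),
    AISystem("marketing copy generator", "content_drafting", False),
]

for system in inventory:
    status = ("likely high-risk: run gap analysis" if system.likely_high_risk
              else "review against the Act's other provisions")
    print(f"{system.name}: {status}")
```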
For further insights, take a look at our recent paper on Building Trustworthy AI, which explores the key areas of interaction between conduct, data protection and ethics in the context of AI applications which have a direct impact on customer outcomes.