AI-driven technologies offer many new opportunities across the private and public sectors. They can extract information from vast amounts of diverse data, identify patterns that a human could not capture, and process cases accurately and consistently at scale, drastically improving people’s lives.
Despite these benefits, AI also introduces potential risks, many of which have become headline news. Regulators investigated a new credit card accused of discriminating against women when setting credit limits. An app to detect kidney injuries was found to have violated UK privacy law. A Dutch court ruled that an AI system used to detect welfare fraud breached fundamental human rights.
In reaction to these risks, companies have started publicising ethical values, often specific to AI. This approach focuses on how an AI solution should behave in order to be “good”, so that those affected by it can trust it. Frequently cited values include fairness, accountability and explainability. A key challenge, however, is operationalising these principles within the AI development lifecycle.
Organisations have additionally introduced design principles to embed these ethical values into AI products. Similar to the “privacy by design” approach in systems engineering, “ethics by design” starts in the initial stages of a modelling project. Applicable technical and non-technical requirements are identified and monitored throughout the AI model development lifecycle. The Deloitte white paper on embedding “human values in the loop” focuses on the principles of impact (beneficence, non-maleficence), justice (procedural fairness, distributive fairness), and autonomy (comprehension, control).
One drawback of this approach is the difficulty of formalising a general set of “values by design,” as these may vary by domain, by solution, and by stakeholder. Even within one organisation or household, people can have different perspectives on what it means to be fair. Our previous blog post outlined the difficulties in defining fairness. Ethical values are subjective, and embedding them into an AI development lifecycle is not simple. However convenient it might be, fairness cannot be automated, because laws and regulations are sensitive to context, human intuition, and ethics.
In addition to a formalised set of values, effective AI ethics governance requires a risk-based approach. The primary focus is the risk to fundamental human rights and freedoms that are widely accepted and enshrined in the EU Charter of Fundamental Rights, including the right to privacy (Articles 7 and 8: personal autonomy, sanctity of the home, secrecy of communications), the right to equality (Article 21: equal treatment and the prohibition of discrimination), and the freedom of expression (Article 11). For example, as part of a Data Protection Impact Assessment (DPIA) under data protection regulation, an organisation should identify any fundamental rights that are at stake in an AI product. The relevant laws and regulations, e.g. the UK Equality Act 2010, can then be identified for each right.
In order to conduct an ethical risk assessment or to build an AI solution in line with ethical considerations, values and principles can be translated further into requirements and a methodology for the development and operation of data-driven solutions. At each stage of the modelling lifecycle, compliance with these values and requirements should be checked and, wherever possible, traced back to the original foundations of the frameworks in use. For example, there should be an established and robust process for identifying any bias in the data, the data collection process, and the algorithmic design, and the impact of each bias should be investigated and minimised where possible; a sketch of one such check is given below. For a deeper analysis of the governance process for AI fairness through the development lifecycle, see this post on embedding AI governance in the development lifecycle, published in the Berkeley Technology Law Journal.
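As an illustration, a data bias check of this kind can be as simple as comparing outcome rates across groups defined by a protected attribute. The sketch below is a minimal, hedged example in Python; the column names, the synthetic data, and the 0.8 threshold are assumptions for illustration, not part of any specific framework.

```python
# Minimal sketch of a bias check: compare selection rates across groups
# defined by a protected attribute. Column names, data, and the 0.8
# threshold are illustrative assumptions, not prescriptive rules.
import pandas as pd


def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. approvals) per group."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Illustrative decisions joined with a hypothetical protected attribute.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })
    rates = selection_rates(decisions, "group", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # a common, but context-dependent, review threshold
        print("Potential bias flagged for investigation.")
```

A check like this would sit alongside, not replace, qualitative review of how the data were collected and how the algorithm was designed.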
No ethical risk can be reviewed in isolation; some risks may be at odds with one another or with an important business requirement. For example, in choosing between two AI models for credit risk evaluation, Model A may grant more loans to racial minority groups than Model B, but Model A may have much lower precision in predicting loan default. Reduced precision implies potential harm to consumers through the approval of unaffordable loans; at a market level, this may result in a more unstable and risky credit market and thus lower levels of financial inclusion. Fairness should therefore be considered only within the context of its broader implications for the customer. Further analysis of this trade-off can be found here.
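To make such a trade-off explicit rather than implicit, the two candidate models can be evaluated side by side on both dimensions. The sketch below is a hedged Python illustration; the applicant data, the model decisions, and the column names are all invented for the example.

```python
# Compare two hypothetical candidate credit models on a fairness measure
# (approval rate per group) and on precision in predicting default.
# All data and names are illustrative.
import pandas as pd
from sklearn.metrics import precision_score


def model_report(df: pd.DataFrame) -> dict:
    """Summarise one model's decisions on fairness and accuracy dimensions."""
    return {
        "approval_rate_by_group": df.groupby("group")["approved"].mean().to_dict(),
        "default_precision": precision_score(df["true_default"], df["pred_default"]),
    }


if __name__ == "__main__":
    # The same four applicants scored by two hypothetical models.
    model_a = pd.DataFrame({
        "group":        ["minority", "minority", "majority", "majority"],
        "approved":     [1, 1, 0, 1],
        "pred_default": [0, 0, 1, 0],
        "true_default": [0, 1, 0, 1],
    })
    model_b = pd.DataFrame({
        "group":        ["minority", "minority", "majority", "majority"],
        "approved":     [1, 0, 1, 0],
        "pred_default": [0, 1, 0, 1],
        "true_default": [0, 1, 0, 1],
    })
    for name, decisions in [("Model A", model_a), ("Model B", model_b)]:
        print(name, model_report(decisions))
```

On this toy data, Model A approves more minority applicants but misjudges defaults, while Model B does the reverse; surfacing both numbers lets reviewers weigh them against the wider harms described above.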
There has been a proliferation of public and private initiatives that describe high-level principles and values to guide ethical AI. Over 65 frameworks have been collected in the AI Ethics Guidelines Global Inventory.
As more organisations strive to develop their own ethical AI frameworks, it is important to understand the limitations of ethical values and design principles in operationalising ethics within the AI development lifecycle across a variety of products. Only a risk-based approach can help uncover the fundamental rights at stake, the relevant laws and regulations, and the necessary governance, tailored to each individual use case. Most importantly, it will help identify when ethical values and business objectives are at odds with one another and provide actionable insights for business leaders.