There is an increasing consensus that unintended, unfair, and discriminatory biases in machine learning (ML) and artificial intelligence (AI) systems are difficult to define and detect in practice. Academics have introduced many mathematical definitions of bias, each capturing a different, and often contradictory, notion of what it means to be fair. The challenge is that fairness and bias are context-specific: AI bias and fairness must be evaluated with the ethical and practical considerations of each use case in mind.

Legislators and regulators are increasingly concerned about the potential for AI/ML systems to perpetuate and exacerbate prejudice and inequalities, as shown by the proliferation of guidance, frameworks and rules on the subject. AI is already delivering tremendous commercial and scientific value, yet organizations recognize that the legal, regulatory and reputational exposure of unfairly biased AI is disproportionately high compared to traditional, people-driven, and less inherently scalable approaches.

As a first major step towards effectively managing AI risk, it is important to understand not only what types of unintended bias may exist in an AI system, but also why they exist and at which stage of the AI development process they were introduced. Was the bias introduced by an imbalanced data set, by mislabelling that embeds human bias into the data, or by an inappropriate feedback loop? There is no silver bullet for unfair bias in AI/ML: a holistic mitigation strategy must combine technical changes to the data and model with non-technical changes to people and processes. First, however, the developer must be able to communicate to the appropriate business stakeholders and leaders which types of bias are present, why they exist, and what should be done about them.
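As a quick illustration of the kind of diagnostic these questions imply, the short Python sketch below checks a training set for two common data-level bias sources: under-representation of a group, and differing label base rates between groups. The column names (`gender`, `approved`) and the toy data are illustrative assumptions, not drawn from the white paper.

```python
# A minimal, hypothetical diagnostic sketch: before blaming the model,
# check whether the training data itself is imbalanced across a
# protected attribute. Column names ("gender", "approved") are
# illustrative assumptions only.
import pandas as pd

# Toy stand-in for a real training set.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [0,   1,   0,   1,   1,   0,   1,   1],
})

# 1. Representation: is one group under-sampled?
print(df["gender"].value_counts(normalize=True))

# 2. Label balance: do base rates of the target differ by group?
#    A large gap may point to mislabelling or embedded human bias
#    in the data, rather than anything the model itself did.
print(df.groupby("gender")["approved"].mean())
```

Neither check is conclusive on its own, but together they help narrow down whether a disparity originates in the data rather than the model.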

In the new white paper, we introduce Model Guardian, a tool we have developed internally at Deloitte to walk a non-technical user step by step through bias identification, investigation of the bias source, bias quantification, and bias communication and reporting. While defining fairness and bias cannot be automated, given the contextual nuances, the process by which we understand and tackle the issue can be formalized into a systematic tool.
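For a flavour of what the bias quantification step can look like in practice, below is a minimal, self-contained sketch of two widely used group-fairness metrics. This is an illustrative assumption about one possible way to quantify bias, not Model Guardian's implementation, and the right metric is always use-case-specific.

```python
# An illustrative sketch of the "bias quantification" step: two common
# group-fairness metrics computed from model predictions. Metric choice
# is context-specific; these are examples, not a recommendation.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest to the highest positive-prediction rate
    (the 'four-fifths rule' flags values below 0.8)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(min(rates) / max(rates))

# Hypothetical predictions for eight applicants in two groups.
preds  = np.array([1, 0, 1, 1, 0, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_gap(preds, groups))   # 0.25
print(disparate_impact_ratio(preds, groups))   # ~0.67, below the 0.8 threshold
```

Whichever metrics are chosen, the point of formalizing the process is that the numbers can then be explained and reported to non-technical stakeholders in a consistent way.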

Download the Deloitte white paper “Striving for Fairness in AI Models” to learn more.

Please reach out to Michelle Seng Ah Lee (michellealee@deloitte.co.uk) for a demo of Model Guardian.