Imagine you were able to confidently identify which complaints, cases or transactions represented the highest risks to your firm. Now imagine you could go a step further and understand the root cause behind particular events. This blog provides real-world insight into how the use of machine learning within operations has benefited our clients from both a risk and a financial perspective, focusing on two specific challenges: optimising the triage process and distinguishing causation from association.
If you’re like one of our recent clients, you may find these challenges familiar:
- Due to an increase in case volumes, the organisation was struggling to identify which cases were likely to fail compliance and had fallen back on a reactive approach to risk management.
- Even for cases that were identified correctly, separating cause from effect relied on a manual process.
- As a result, compliance and assurance costs were rising significantly and threatening to become unsustainable. Given this workload, the compliance team were unable to focus on value-add activities.
We helped overcome these challenges with a four-pronged approach:
- Architecture: You first need to understand the current data architecture to ascertain the key data sources driving compliance and quality monitoring. Building the ‘data’ picture can help increase the speed of decision making and improve the overall accuracy of the model(s).
- Data Governance: At a high level, machine learning requires data that is complete, accurate and auditable. We analysed the prevailing deterministic approach used to sample cases for spot checks, including the quality of the underlying data and how the sampling rules were applied.
- Machine Learning & Population Selection: Defining the correct population sample is critical in predicting the outcome. Once a suitable model is identified, developed and validated, it is run in parallel to the existing process. This allows a comparison between the two approaches and ratification that the machine learning model is able to predict accurately into the future.
- Operating Model: The final step is to make the necessary adjustments to the operating model to use the predictions to reduce false negatives and increase true positives.
The Results:
- Up to 50% uplift in failure detection
- Identification of the causal factors behind quality control failures
- A 15% reduction in operational expenditure (OPEX)
Context:
Financial institutions apply substantial resources to Quality Control and Assurance processes, aiming to catch cases and transactions that could harm the customer or the firm before they become an issue. However, spotting problematic transactions is difficult, and many institutions rely on simplistic, deterministic approaches to selecting transactions for review. To provide the necessary confidence, banks therefore test significantly more than is needed, which leads to unplanned operational costs.

Over the course of the COVID-19 pandemic, there has been significant uncertainty surrounding customer interactions, capacity planning and operational risk. It has been interesting to see from our clients that new queries and complaints progressively reduced from March through to August. However, volumes of new interactions have started to accelerate, and this is where triaging of these cases has become key to managing capacity and dealing with the most vulnerable customers.
To target those cases or transactions that are higher risk and reduce costs, firms need to solve two main problems:
- Incorporate machine learning across each process or area where a decision is required to provide an outcome to the customer.
- Understand how to separate causation from association.
Quality Control:
Traditional Quality Control and Assurance methods struggle to consume large and complex datasets together with complex business rules and filtering. There are four key steps in preparing to move from a deterministic to a probabilistic approach:
- Architecture: Operations teams can often feel disconnected from the selection of the data that is consumed, and this can limit the types of analytical techniques that can be used. Many operational datasets are provided infrequently, and often without a holistic view of the case or complaint, because of limited or no integration with core banking systems. Organisations should define and move toward a micro-services approach that allows data to be added with minimal impact on other operational processes. A key by-product of an established data architecture is the increased frequency and breadth of data available to operations management and support teams; machine learning works best when it is fed data that is available frequently and in a digestible format, reducing latency. Clients are now building alliances with third-party ecosystems to access a number of capabilities. In this context, transactions can be scored and monitored via SaaS (Software as a Service), reducing the risk on the client side and enabling clients to focus on reducing customer detriment.
- Data Governance: In order for machine learning models to provide accurate predictions, data must be complete, within tolerance and measured by the data governance function for accuracy. Classifications of existing failures and non-compliance must also be logged to provide the data scientists with a 'target' variable to model against.
- Machine Learning & Population Selection: Transitioning away from manual rules to machine learning can sound quite daunting. However, with the right governance, testing, personnel and technology in place it is easier than it sounds. Below are some of the key principles Deloitte can help to get you up and running so you don’t have to:
- Flexibility and adaptability are key when developing machine learning models, which is why open-source packages are great to work with: there is minimal cost to the client, access to a wide range of algorithms, and easy access to extensive training material for those looking to enter the world of artificial intelligence.
- Identifying the correct population to train the models is fundamental in achieving the desired outcome, which in this case is to predict non-compliance or failure before it happens. This matters for two key reasons. Firstly, the sample used to train the model must be representative of the population, so that there is a high degree of confidence in the predictions it produces. Secondly, it reduces the number of false negatives and false positives, so that the correct cases are worked and any potential risk is caught earlier in the process.
- Explainability: Audit & Assurance require machine learning models to be explainable, auditable and ethical. For this reason, the Deloitte Risk Analytics team uses hypothesis testing and partial impact analysis (e.g., RESET, LIME and SHAP, among others) so that developers can monitor and explain why the model makes the decisions it does.
- Operating Model: Many machine learning engagements fail at this step: although the predictions may meet the required accuracy, the necessary changes across the operating model have not been fully thought through. Factors include workflow management, integration of the output from the machine learning models, model performance testing to reduce the risk of model degradation and, depending on the use case, the process or policy changes needed to improve the customer outcome. The most significant reason for failure, however, is resistance from employees in moving from a settled process that may have been in place for a long period of time to a completely different way of working. By highlighting the operational benefit and positioning machine learning as an enabler, we can considerably increase the chances of a successful implementation. Machine learning models can be run in parallel to existing processes, and a 'live proving' phase can ascertain the effectiveness of the model in a live environment, as sketched below.
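To make the population selection, explainability and parallel-run points above more concrete, here is a minimal sketch in Python. The dataset, file name and feature names are hypothetical, and the choice of a gradient-boosted classifier and SHAP is illustrative rather than prescriptive; any model would need the governance and validation described above before going anywhere near a live process.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
import shap  # for explaining individual predictions

# Hypothetical historical extract: one row per case, with a binary label
# marking cases that previously failed quality control.
cases = pd.read_csv("historical_cases.csv")
feature_cols = ["product_type", "handler_tenure_months",
                "prior_complaints", "case_age_days"]      # illustrative features
X = pd.get_dummies(cases[feature_cols])
y = cases["qc_failed"]                                    # 1 = failed QC

# Stratified split so the (rare) failure class is represented in both samples,
# keeping the training population representative of live volumes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Parallel-run style evaluation: false negatives are the failures the model
# would have missed, false positives the compliant cases it would over-sample.
predictions = model.predict(X_test)
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))

# SHAP values indicate which features drive each individual prediction,
# supporting the explainability requirements described above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
```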
Identifying causation from association:
Quality Control and Assurance processes are traditionally built on deterministic approaches: a number of risk rules applied to cases and transactions. These frameworks incorporate several layers of complexity to identify non-compliance. However, the focus is generally applied to a couple of factors in isolation, which can lead to uncertainty about how other characteristics have contributed to non-compliance. This is the significant deficiency of deterministic approaches: rule-based sampling frequently falls out of sync with changes in the risk profile, allowing problematic complaints and transactions to go unnoticed. Compliance teams often resort to 'over' sampling to counter this challenge, which often leads to a significant increase in false positives and is exacerbated further by limitations in architecture and technology.
The first step in improving an existing assurance process is to identify what is causing non-compliance as opposed to what is merely associated with it.
Approach
Each determinant of non-compliance often displays some degree of correlation with the final outcome under analysis. However, this does not mean that a factor is causing non-compliance. The correlation only establishes a statistical association: the two events tend to occur together. In order to establish a causal relationship, one needs to remove the bias introduced by factors that merely move together with both the suspected driver and the outcome.
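As a minimal illustration of that distinction, the sketch below uses synthetic data (invented purely for demonstration) in which team workload drives both product complexity in the case mix and the failure rate. The naive regression makes product complexity look like a driver of failure; once the confounder is controlled for, its estimated effect collapses towards zero.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

workload = rng.normal(size=n)                              # confounder: team workload
product_complexity = 0.8 * workload + rng.normal(size=n)   # associated factor
failure = 0.7 * workload + rng.normal(size=n)              # outcome driven by workload only

# Naive model: complexity appears to "drive" failure purely through association.
naive = sm.OLS(failure, sm.add_constant(product_complexity)).fit()
print(naive.params)      # clearly non-zero coefficient on complexity

# Adjusted model: once workload is included, complexity's effect collapses to ~0.
adjusted = sm.OLS(failure, sm.add_constant(
    np.column_stack([product_complexity, workload]))).fit()
print(adjusted.params)   # complexity coefficient now close to zero
```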
The Detail
A key priority across the operational landscape is to understand the underlying reasons behind sub-optimal compliance quality. The ability to identify the key determinants of non-compliance involves sophisticated statistical techniques that are able to separate association from causation. There are a few reasons why an impact estimate would not be causal. The most common ones are:
- “Reverse causality” – for example, when a new regulation is introduced and quality appears to deteriorate, quality control failures are likely rising because of increased awareness and subsequently increased volumes. However, this does not mean that the new regulation is causing the failures.
- “Omitted variable bias” – if an important determinant of non-compliance is omitted from the model, the estimated effects of the factors that remain will be biased (the confounding sketch above is a simple illustration of this).
- Developing machine learning models from scratch requires time, not just to understand which type of statistical model best fits the data distribution but, just as importantly, which ‘features’ matter most in identifying non-compliance.
- The first line of defence (1LoD) will have a strong understanding of the wider factors that can affect the customer outcome. There is often a disconnect between case handlers and senior management because of the complexity of how cases and transactions flow through operational processes. Cataloguing these features can play a key part in model development.
- Once the business understanding of the operational process has been established, model specification is paramount. Statistical considerations and thorough hypothesis testing form part of the model development journey; a simple specification check is sketched below.
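As one example of such a specification check, the sketch below applies the Ramsey RESET test to a deliberately mis-specified linear model fitted on synthetic data. It assumes a recent statsmodels release (which ships linear_reset); the feature names and the non-linear data-generating process are invented purely for illustration.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import linear_reset

rng = np.random.default_rng(1)
n = 2_000

handler_tenure = rng.normal(size=n)            # hypothetical feature
case_age = rng.normal(size=n)                  # hypothetical feature
# The true relationship is non-linear in tenure, so a purely linear model is mis-specified.
failure_score = 0.5 * handler_tenure**2 + 0.3 * case_age + rng.normal(size=n)

X = sm.add_constant(np.column_stack([handler_tenure, case_age]))
linear_fit = sm.OLS(failure_score, X).fit()

# A small p-value indicates omitted non-linearity, i.e. the functional form needs rework.
print(linear_reset(linear_fit, power=3, use_f=True))
```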
Example
We will give a real-world example from the complaints domain. Risk-based rules are often applied to transactions based on a number of key factors: product complexity, case handler competency and historic customer interactions. However, these rules are often applied in isolation.
For example, within the complaints environment, complex products such as mortgages and secured loans are generally deemed higher risk, and customers typically hold these products over a much longer duration, spanning several years. This may have been true during the inception phase of the Quality Control framework, but changes in policy and behaviour mean the risk is now mitigated by additional compliance checks and increased training for such products. Spotting these emerging shifts in risk is where the traditional methods outlined above struggle.
By leveraging machine learning in this context, we found that cases involving simpler products, such as a single loan or multiple credit cards, end up being rushed because of their apparent simplicity and heightened productivity targets, and this ‘speed’ creates a number of failure points. Combining this with how handlers behave, factoring in competency, shift pattern and training requirements, builds a dynamic set of rules tailored for sampling purposes, as sketched below.
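A minimal sketch of what that dynamic sampling could look like is shown below. The file names, feature names and capacity figure are hypothetical, and it assumes a previously trained and validated failure-prediction model has been saved to disk; in practice the dummy-encoded columns would also need to be aligned with those used in training.

```python
import joblib
import pandas as pd

# Load a previously trained and validated QC failure model (hypothetical artefact).
model = joblib.load("qc_failure_model.joblib")

# Hypothetical daily extract of new cases awaiting quality control.
new_cases = pd.read_csv("todays_cases.csv")
feature_cols = ["product_type", "handler_tenure_months", "shift_pattern",
                "training_overdue", "prior_complaints"]   # illustrative features
features = pd.get_dummies(new_cases[feature_cols])
# In practice, reindex these columns so they match the training set exactly.

# Score each case with its probability of failing quality control, combining
# product, handler and behavioural factors rather than applying static rules.
new_cases["failure_risk"] = model.predict_proba(features)[:, 1]

# Route the riskiest cases to QC first, up to the team's daily capacity,
# instead of sampling a flat percentage of every product category.
daily_capacity = 200
selected_for_qc = new_cases.sort_values("failure_risk", ascending=False).head(daily_capacity)
```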
We have seen this work in practice: implementing machine learning increased the detection of complaints that would fail compliance by 40% while also reducing operational costs by 15%.
Conclusion:
There is understandable apprehension about transitioning away from assurance processes that have been in place for a number of years. But with an ever-changing environment and the volatility surrounding COVID-19, now is the time to consider new advanced techniques. By implementing machine learning to support Quality Control and Assurance, operations teams can become significantly more effective, whilst reducing effort on cases that are unlikely to fail compliance. Better screening and monitoring enable your teams to focus on the cases that are truly challenging, complex and risky.