Our article, “A principled approach: Model Risk Management in the PRA’s spotlight”, highlighted the principles laid out in the PRA’s consultation paper CP6/22 on model risk management. The PRA’s approach to model risk management is centred on its definition of a “model”. That definition is used to frame the discussion of the scope and principles set out in CP6/22, as well as their implementation within an organisation’s risk management framework.
Few would disagree that valuation models or capital models qualify as models (with apologies for the tautology, the clue is in the name!), since such models are built on quantitative methods and techniques in common use.
In critical and core functions such as electronic execution and credit analysis, automation and digitalisation appear set to continue for the foreseeable future. Algorithms, estimators, heuristics and numerical approximations have historically also been used in these areas to support or drive decision making. Yet such structures, approaches and techniques have often not been captured by firms’ definitions of a model, registered in the model inventory, or subjected to model risk oversight.
CP6/22 seeks to address such gaps, and to raise their importance, by clarifying the definition of a model as a: “quantitative method that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into output”.
In this blog post, we explore some categories of quantitative methods that involve a theory or assumption, but for many firms may not currently be satisfactorily captured within the scope of existing model risk policies.
In common with other regulators, the PRA is using CP6/22 to bring further clarity to an area of quantitative finance that, from a practitioner’s perspective, has to date been awash with ambiguity and imprecision. Across the world’s various regulatory texts on model risk, references to “model” greatly outnumber references to “algorithm”. For many in finance this is quite understandable, but the dominance of algorithmically driven electronic trading, together with the mainstream adoption of artificial intelligence and machine learning, is beginning to point to the need for a clearer distinction (and perhaps a more nuanced treatment) between algorithms and models.
The classical way of thinking about a model is as the result of an iterative process involving hypothesis, mathematical reasoning, algorithmic development and testing (using data). Machine learning offers an alternative perspective: an algorithm is a general approach, whilst a model is the result of running one or more algorithms over data to provide predictive capabilities. The machine learning approach typically generates a functional approximation that (conditional on the parameters and calibration of the algorithm, and prior(s)) is assumed to be the optimal mapping of input to output. New models can be generated with the same algorithm using different data, and it is equally possible to generate a new model from the same data with a different algorithm. Models can therefore be thought of as mathematical descriptions of systems, whereas algorithms are sets of actions or rules required to implement a hypothesised vision or belief.
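The distinction can be made concrete with a toy sketch (the function name and data below are illustrative, not from the CP): least-squares fitting is the algorithm; each slope it returns from a particular dataset is a distinct fitted model.

```python
def fit_slope(xs, ys):
    # The "algorithm": least-squares slope of a line through the origin
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Same algorithm, different data -> different fitted models
model_a = fit_slope([1, 2, 3], [2.1, 3.9, 6.2])
model_b = fit_slope([1, 2, 3], [1.0, 2.2, 2.9])
print(model_a, model_b)  # two different slopes, i.e. two different models
```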
Do algorithms meet the proposed definition of a model? The application of a technique or theory would imply that they do.
An estimator is a rule or procedure for inferring the value of some hitherto unknown quantity of interest – typically the parameter(s) of a probability model. The definition is broad and would admit just about any rule that returns an estimate. As an example, if we were to ask an A-Level maths class to estimate the number of taxis in a city, any proposed approach would be admissible, irrespective of how far its result might be from the true value or how time-consuming it might be to execute.
However, some estimators are better than others. Estimation theory provides a framework for defining the desirable properties of an estimator, and indeed deriving the best possible estimator that meets those properties. Examples of estimators in widespread use include Maximum Likelihood Estimation (MLE), Maximum A Posteriori (MAP) estimation and the method of moments (moment-matching).
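To make the point concrete, here is a minimal sketch (with hypothetical simulated data, not from the CP): two standard estimators of the upper bound θ of a Uniform(0, θ) distribution – the MLE, which is the sample maximum, and the method-of-moments estimator, which is twice the sample mean – applied to the same data generally return different estimates.

```python
import random

random.seed(7)
theta = 10.0                              # true (in practice, unknown) parameter
data = [random.uniform(0, theta) for _ in range(1000)]

mle = max(data)                           # maximum likelihood estimator of theta
mom = 2 * sum(data) / len(data)           # method-of-moments estimator of theta
print(mle, mom)                           # both close to 10, but not equal
```

Estimation theory is what tells us how to compare the two, e.g. via bias and variance.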
Do estimators meet the proposed definition of a model? The application of estimation theory (and its underlying axioms as assumptions) would imply that they do.
A heuristic is an approach to problem-solving that employs a strategy of trading accuracy for pragmatism – typically, accuracy is sacrificed for execution speed. Heuristics need not be grounded in theory, or their theoretical basis may be incomplete; in general, they are regarded as rules of thumb or shortcuts. Nevertheless, they are often the only viable option for certain categories of problem for which a practical algorithm (e.g. one that would complete in an acceptable amount of time) may not exist.
Perhaps the most ubiquitous heuristic in credit model development is stepwise selection. Whilst widely accepted, the stepwise procedure guarantees neither a complete traversal of the search space nor that the returned model achieves the global maximum goodness-of-fit.
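The mechanics, and the local-optimum risk, can be sketched in a few lines. The scoring function below is a contrived stand-in for a real goodness-of-fit measure, chosen so that two variables are only predictive jointly; the greedy procedure never discovers them.

```python
def forward_stepwise(candidates, score):
    """Greedy forward selection: at each step add the single variable that most
    improves the score; stop when no single addition improves it further."""
    selected, best = [], score([])
    remaining = list(candidates)
    while remaining:
        top_score, top_var = max((score(selected + [v]), v) for v in remaining)
        if top_score <= best:
            break  # no one-variable improvement: stop, possibly at a local optimum
        selected.append(top_var)
        remaining.remove(top_var)
        best = top_score
    return selected, best

def toy_score(subset):
    s = frozenset(subset)
    if s == frozenset({"a", "b"}):
        return 1.0   # a and b are only predictive jointly
    if "c" in s:
        return 0.3   # c is weakly predictive on its own
    return 0.0

sel, fit = forward_stepwise(["a", "b", "c"], toy_score)
print(sel, fit)  # stops at ['c'] with score 0.3; never finds {a, b} with score 1.0
```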
Do such heuristics therefore meet the proposed definition of a model? The application of a technique (and indeed the assumption that the heuristic delivers a suitable trade-off between accuracy and pragmatism) would imply that they do.
Algorithms based on numerical approximation (as opposed to symbolic manipulation) form the foundation of most (if not all) problem solving performed by computers. Well-known mathematical examples include Newton’s Method for finding the root of an equation and Euler’s Method for solving differential equations. One common feature of such numerical approaches is the need to discretise continuous problems and data by sampling, thereby trading lower precision for reduced storage and improved computational efficiency.
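As a minimal illustration (not from the CP), Newton’s Method approximates a root of f by iterating x ← x − f(x)/f′(x). Applied to f(x) = x² − 2, it approximates √2 to machine precision in a handful of iterations:

```python
def newton_sqrt2(x0=1.0, tol=1e-12, max_iter=50):
    """Newton's Method for f(x) = x^2 - 2: iterate x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - (x * x - 2.0) / (2.0 * x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x  # return the best approximation if tolerance was not reached

print(newton_sqrt2())  # close to 1.4142135623730951
```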
A ubiquitous example of discretisation in the real world is computer arithmetic. Computers approximate real numbers using floating-point or fixed-point representations, trading exactness and the range over which numbers can be expressed against the precision required.
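The effect is easy to observe. In the double-precision arithmetic used by most languages (Python below), neither 0.1 nor 0.2 is exactly representable, and the error surfaces even in elementary operations:

```python
x = 0.1 + 0.2
print(repr(x))       # 0.30000000000000004: neither operand is exactly representable
print(x == 0.3)      # False

# Repeated addition accumulates the representation error
total = sum(0.1 for _ in range(10))
print(total == 1.0)  # False: the sum is slightly below 1.0
```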
Until the mid-1980s, different computers handled the trade-off between range and precision differently. For example, many machines could return 0 from X − Y even though X and Y were different. Exception handling (e.g. how the machine behaved if it encountered a number too close to zero to represent, an underflow condition) also varied.
The advent of the IEEE 754 standard brought consistency in the implementation of rounding, arithmetic and exception handling, to the extent that software developers can generally assume that computational results will be consistent between hardware and operating systems that execute their code.
Crucially, however, the standard does not guarantee that individual operations or entire software applications will return the “right” result, or even the identical result from identical initial conditions. Rounding errors can easily accumulate in software that repeats even the most basic operations, such as addition and subtraction. Real-world examples of floating-point error affecting results include:
An index of securities prices on the Vancouver Stock Exchange truncated its value to three decimal places at each update, resulting in the reported value being out by a factor of around two after less than two years.
A missile defence system measured time as the number of 0.1s periods since it was powered up, multiplying the number of clock ticks since start-up by a finite-precision binary approximation of 0.1 (a value with no exact binary representation). After 100 hours of operation the accumulated rounding error was 0.34s – more than enough for the system to become ineffective.
The guidance system of the unmanned Ariane 5 rocket attempted to convert a 64-bit floating-point number to a 16-bit signed integer. The value exceeded 32,767, the largest representable 16-bit signed integer, so the conversion failed, resulting in loss of guidance and, shortly thereafter, the loss of the rocket and its cargo, valued at more than $500m.
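The drift in the missile-timing example above can be reconstructed in a few lines. This is a plausible sketch, assuming (as is commonly reported for that incident) that the binary expansion of 0.1 was chopped to 23 fractional bits in a 24-bit register:

```python
# Chop the binary expansion of 0.1 to 23 fractional bits (an assumed
# reconstruction of the commonly reported 24-bit register behaviour)
chopped = int(0.1 * 2**23) / 2**23
error_per_tick = 0.1 - chopped     # roughly 9.5e-8 seconds per 0.1s tick

ticks = 100 * 3600 * 10            # 100 hours of operation, 10 ticks per second
drift = ticks * error_per_tick
print(round(drift, 2))             # about 0.34 seconds of accumulated drift
```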
Does numerical analysis meet the proposed definition of a model, or does it fall more into the category of an algorithm? The application of theory and techniques (and indeed the assumption that a solution is a sufficiently accurate approximation of the true value) would suggest that it, too, meets the proposed definition.
One More Thing…
CP6/22 is a welcome step towards a principles-driven approach to model risk, but it is not a panacea. Principles-based regulation encourages the right behaviours, and discourages box-ticking, from the board down to practitioners in core risk-taking functions. Reality nevertheless abounds with exceptions that create practical difficulties if the industry is to manage model risk consistently.
Appealing to model theory offers only the broad guidance that a model is a set of tools for understanding complex structures (both abstract and real). The inevitable consequence of a more encompassing scope for model risk is therefore a greater burden for model risk management teams (who are frequently already stretched), as the automation of front- and back-office functions, together with AI, blockchain and cryptocurrencies, continues to accelerate.
As banks start to mobilise and implement the CP6/22 principles within their risk management frameworks, there remain questions on which banks and supervisors will need to align. Arguably near the top of the list is whether algorithms, estimators, heuristics and numerical analysis are to be treated in exactly the same way as traditional models.
CP6/22 leaves the door open for banks to be proportionate in their MRM implementations. For banks to form a view on what is proportionate, however, is likely to require an iterative discovery exercise that extends far into the “tail” of currently-anticipated non-models before the detailed definitional nuances can be fully identified and implemented in a practical framework. It may seem like common sense to one organisation’s board not to include every numerical method used in the organisation within the model definition; the next organisation, however, may have had a disaster or near-miss associated with a bad algorithm, the wrong estimator, a heuristic shortcut too far, or floating-point truncation.
If the answer is in the affirmative, then CP6/22 presents many complex challenges to the ever-expanding task of managing model risk. It is a fascinating time to be working in the domain of model risk management.