I’m a consultant in the capital markets industry with a recreational interest in philosophy; I’m enthusiastic about both topics, but they don’t normally intersect very much! However, there is now reason to think that the financial markets and the philosophy of mind are becoming relevant to each other, and the reason is the increasing importance of AI in our industry.
These technologies have brought not only a new language and new conceptual and physical tools for engaging with old ideas about human minds – they have created an imperative to do so, in order to deploy these technologies responsibly. There is, rightly, great attention on how to make AIs ethical as well as effective: understanding what they are, and are not, is a prerequisite to getting this right.
Contemporary AI (sometimes called second-generation AI) employs machine learning approaches that don’t rely on pre-specified models to perform tasks (as first-generation AI did), but instead learn by iterating through big data sets to identify relationships that would never be directly identified by a human (e.g. because they may involve individually weak correlations between thousands of variables), and use these relationships to recognize faces or speech, read x-rays, and so on. With the addition of a utility function enabling it to ‘score’ various situations, the system can identify and follow paths to achieve an outcome (for example, winning a game of Go).
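The role of that hand-specified utility function can be made concrete with a toy sketch (purely illustrative – the function, states, and moves here are hypothetical, not any real system):

```python
# Illustrative sketch: a utility function, 'put in by hand', lets a
# program score candidate states and greedily follow the path that
# maximizes the score at each step.

def utility(state):
    # Hypothetical scoring rule: prefer states closer to a target of 10.
    return -abs(10 - state)

def best_path(start, moves, steps):
    """Greedily apply whichever move yields the highest-utility next state."""
    state, path = start, [start]
    for _ in range(steps):
        state = max((m(state) for m in moves), key=utility)
        path.append(state)
    return path

moves = [lambda s: s + 1, lambda s: s - 1, lambda s: s * 2]
print(best_path(1, moves, 4))  # → [1, 2, 4, 8, 9]
```

The system ‘wants’ nothing; it merely climbs a scoring function a human supplied – which is the sense in which the goal is put in by hand rather than arising from the system itself.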
These approaches have been incredibly successful; unlike first-generation AIs, which were laborious to create and typically useful only for a specific purpose, second-generation approaches enable flexible systems to be trained by end users (democratizing AI), and to outstrip human capabilities in many ways. This success has given us new and powerful tools (though the ‘newness’ is often attributable to raw compute – many techniques have existed for a long time but have only become effective with huge data sets and processing speed). However, it has also led to excited claims that creating a true Artificial General Intelligence is tantalisingly close.
I think that there is good reason to believe that this is *not* the case – and that this can tell us something about minds themselves, and about the things for which AI is likely to be useful in the near term:
- While it is hugely powerful, and on course to transform industries and society radically, I do not believe that contemporary AI is creating systems that are particularly similar to minds. I am not a mystic about this; it would be possible in principle to make such a device (a human being is one!) – but there is no reason to believe that our computers bear much relation to this; we actually have no idea how to start constructing such a thing.
- In some ways this is obvious – computers are good at things we are comparatively bad at, e.g. arithmetic, but can’t do things that babies can, e.g. spontaneously laugh. And the way that minds relate to the world is very different, involving forms of commitment and accountability (caring!) which we have no idea how to implement in a computer (the utility function referred to above is typically ‘put in by hand’).
- This difference means that there are important things that no currently contemplated technology can do.
For a clear statement of why this is, a good reference point is a recent book by Brian Cantwell Smith (Professor of Computer Science and Philosophy at the University of Toronto), The Promise of Artificial Intelligence: Reckoning and Judgment (2019).
Roughly, Cantwell Smith divides capabilities into two categories – reckoning and judgement. Crudely (if interested, read his book; errors here are mine, not his) – computers can reckon, but cannot judge. I think his division is an instructive way to think about how AI and humans can work together in the financial markets:
- Reckoning roughly in the sense of performing a calculation; what computers are brilliant at
- Anything that can be reduced to arithmetic or consists in rigidly following rules
- In a markets business:
- activities like pricing, valuation, risk calculations, order execution, surveillance and monitoring (up to a point);
- basically the whole trade flow in vanilla and linear products, and more recently some options products, in normal market size;
- providing analytics and tooling to support human decision makers;
- advisory services based on formulaic approaches, for example what is the optimum strategy to achieve xyz – where the objective is given and the inputs to the problem are known / quantifiable.
- Judgement roughly in the sense of ‘x displays good judgement’; what humans are brilliant at
- Responding to new situations, setting goals, explaining why they did something (although of course they can be deluded, or lie!), emotional commitment and empathy
- In a markets business:
- oversight and responsibility, setting strategy and direction, communicating with clients, supervisors, and other stakeholders;
- considering the impact of factors or events not present in historical data;
- advisory services where the objective needs to be determined, and/or where the inputs to the problem are unknown or hard/impossible to quantify;
- taking responsibility.
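To make the ‘reckoning’ side of the list concrete, here is a minimal sketch of a pricing calculation – the kind of task that reduces entirely to arithmetic. The figures are hypothetical, not market data:

```python
# Illustrative sketch: pricing a fixed-coupon bond by discounting its
# cash flows. A pure 'reckoning' task: given the inputs, the answer
# follows mechanically from arithmetic, with no judgement required.

def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of annual coupons plus principal repaid at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_principal = face / (1 + yield_rate) ** years
    return pv_coupons + pv_principal

# A bond whose coupon equals its yield prices at face value ('par').
print(round(bond_price(100, 0.05, 0.05, 10), 2))  # → 100.0
```

Judgement enters, by contrast, in deciding whether the model’s assumptions still hold – for example, whether a yield drawn from historical data remains meaningful after an event the data has never seen.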
Ensuring that our AIs perform reckoning tasks ethically is a major area of focus (otherwise, e.g., they may reflect inappropriate factors – and while gross examples of this may be easily noticed, others could be non-obvious). However, a logically and practically prior requirement is to employ AIs in ways that recognise that they are not (yet) moral agents *at all*, and that respect the qualitative, not merely quantitative, constraints on what they can do and how they can interoperate with humans effectively.
Working out the best ways to combine the distinctive capabilities of artificial and human intelligences is a great challenge of our age; those who get it right will be positioned to thrive. And perhaps surprisingly, figuring out how to deploy our new tools most effectively will demand deep reflection on ourselves, and on the distinctive nature of human cognition and reasoning, too.