Automated Machine Learning (AutoML) represents a seismic shift in the way companies approach Machine Learning problems. Traditional approaches are typically labour-intensive, demand a large amount of time from specialists such as Data Scientists and domain experts, and offer no guarantee of success. Automating the creation and testing of Machine Learning (ML) models has been adopted in many areas of the Financial Services sector and allows for vastly greater experimentation without the same time and resource costs. Banks, funds and any organisation interested in leveraging intelligent insight from their data need to carefully assess the opportunities and hazards associated with these tools.
Traditionally, the focus of AutoML tools has been automating model selection and hyperparameter optimisation, i.e. finding the model with the best performance on a given dataset and saving you the drudgery of conducting your own repeated experiments. One of the key benefits this offers is that it lowers the barrier to entry, effectively democratising Machine Learning. Someone with limited ML experience and a suitable dataset can automatically build a model with performance comparable to one that would require the time of an experienced Data Scientist. Leveraging the power of AutoML allows innovation and transformation in business areas that previously lacked the budget, resources or experience to pursue it.
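The automated search described above can be illustrated with a minimal sketch using scikit-learn's RandomizedSearchCV. The synthetic dataset and the search space here are illustrative assumptions, not taken from any specific AutoML product; real tools search across many model families and far larger spaces.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Illustrative synthetic dataset standing in for a real business dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A small hyperparameter search space -- AutoML tools explore spaces like
# this (and across many model families) automatically.
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, 10, None],
    "min_samples_leaf": [1, 2, 5],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=10,          # number of random configurations to try
    cv=3,               # 3-fold cross-validation per configuration
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Each candidate configuration is scored by cross-validation and the best is kept, which is exactly the repeated experimentation a practitioner would otherwise run by hand.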
If AutoML is capable of doing the work of an experienced Data Scientist, it raises questions as to what role Data Scientists will serve. The benefit of AutoML is that it frees Data Scientists to divert their attention towards more advanced pre-processing and modelling techniques. They can open up their research to a wider, more valuable domain, or look at improving processes in other, more complex areas of the business. Freeing up a Data Scientist's time allows them to add value in areas that had previously not been prioritised.
One of the key limitations of AutoML is the lack of advanced data pre-processing and feature engineering capabilities. A dataset is rarely in a clean format, ready to use with all of the necessary information tidily presented, and much of a Data Scientist's time is typically spent preparing it. AutoML tools have developed techniques to handle simple data manipulation tasks automatically; for example, most include strategies to simplify awkwardly distributed data and to handle missing values or anomalies. Some of the more advanced tools can pre-process common input types such as text or images. The real value, however, is added when feature engineering is combined with domain knowledge and expertise. Feature engineering is the process of creating new features and extracting valuable information from a dataset. It typically requires research across the dataset within the context of the subject matter, demanding logical thought and the input of subject matter experts. When done correctly, feature engineering can extract remarkably rich insight, allowing for dramatic improvements in model performance. As the adoption of ML spreads into more complicated and less structured domains, the importance of feature engineering is becoming increasingly apparent. An example would be applying Machine Learning to an internal company dataset that is tightly coupled with the company's internal processes. Existing AutoML tools are unable to replicate the combined expertise of an experienced Data Scientist and a knowledgeable subject matter expert.
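The contrast between routine pre-processing and domain-driven feature engineering can be sketched concretely. The loan dataset below is hypothetical: the imputation and log transform are the kind of steps AutoML tools automate, while the loan-to-income ratio stands in for a feature only a subject matter expert would know to create.

```python
import numpy as np
import pandas as pd

# Hypothetical loan dataset with a missing value and a skewed column.
df = pd.DataFrame({
    "income": [30_000, 45_000, np.nan, 250_000],
    "loan_amount": [5_000, 12_000, 8_000, 100_000],
})

# Routine pre-processing that AutoML tools typically automate:
df["income"] = df["income"].fillna(df["income"].median())  # missing data
df["log_income"] = np.log(df["income"])                    # tame the skew

# Domain-driven feature engineering that still needs human expertise:
# a credit analyst knows the loan-to-income ratio is what actually
# drives affordability, so it is created explicitly.
df["loan_to_income"] = df["loan_amount"] / df["income"]

print(df[["log_income", "loan_to_income"]])
```

The first two steps are mechanical and dataset-agnostic; the last encodes subject-matter knowledge that no search over transformations is guaranteed to find.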
The input of a Data Scientist reaches further than the manipulation of data. If AutoML tools are used without a full understanding of, or the authority to challenge, their outputs, the resulting product can become a “black box model”. Focussing purely on the model’s performance (by optimising it around a key metric) can detract attention from how exactly the model is using the data. When performance is not acceptable, an AutoML user may simply give up or try a different AutoML tool. The more damaging consequences may occur when an AutoML algorithm creates a model that meets or exceeds the required performance. Focussing on a performance metric alone and ignoring the inner workings of a model can lead to overfitting on the dataset, an inability to explain changes in performance, or ethical questions about biased predictions made by the model. Machine Learning models are being given increasing levels of freedom in ever-more critical domains such as medical diagnoses, driverless vehicles and credit decisions. As the decisions driven by a model’s predictions carry ever-greater impact, it is paramount that we understand how the model has come to its conclusions and that we can verify that the features influencing a decision were not only relevant but also ethically sound. If AI is to be adopted in wider society for higher-impact use-cases, businesses need to ensure that they can trust the mechanics involved and can clearly explain how decisions are made. This is something that cannot be achieved with ‘out of the box’ AutoML tools. Explainability tools such as LIME go part of the way towards giving us confidence that a model is well-informed and behaving appropriately, but these methods are typically very sensitive, offer only localised explanations and can give rise to counterintuitive results.
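The local-explanation idea behind tools such as LIME can be sketched in a few lines: perturb a single instance, weight the perturbed samples by proximity, and fit an interpretable linear surrogate to the black-box model’s predictions in that neighbourhood. This is a simplified illustration of the technique under assumed data and settings, not the LIME library’s actual implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Black-box model standing in for an AutoML-produced model.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Instance whose individual prediction we want to explain.
instance = X[0]

# 1. Perturb the instance to sample its local neighbourhood.
neighbourhood = instance + rng.normal(scale=0.5, size=(500, X.shape[1]))

# 2. Weight samples by proximity to the instance (closer = heavier).
distances = np.linalg.norm(neighbourhood - instance, axis=1)
weights = np.exp(-(distances ** 2))

# 3. Fit an interpretable linear surrogate to the black box's
#    predicted probabilities in that neighbourhood.
targets = black_box.predict_proba(neighbourhood)[:, 1]
surrogate = Ridge(alpha=1.0).fit(neighbourhood, targets, sample_weight=weights)

# The surrogate's coefficients give a local, per-feature explanation.
print(surrogate.coef_)
```

The sensitivity noted above shows up directly here: the explanation depends on choices such as the perturbation scale and the proximity kernel, which is why these methods must be interpreted with care.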
The greatest limitation of AutoML is that it cannot frame business requests as Machine Learning problems. In many cases, positioning the problem is the biggest challenge. Most business requests are broad and multi-faceted, and must be framed appropriately in order to reap the benefits of Machine Learning. Even if we leverage AutoML, the experience of a Data Scientist will be required to set the boundaries of the automated model development. The amount of data stored by companies is growing exponentially, and Data Scientists are spending more and more time trying to wrangle and compile this data into a usable and informative format. Data Scientists will often joke that their job is 90% data preparation and 10% Data Science. While companies are increasingly aware that they need to organise and store this data in an appropriate, consumable format, it still requires expertise to make sense of the information, identify trends in the data and generate valuable insight.
Performance is a key measure of a Machine Learning model, and AutoML tools are effective at achieving the required performance more quickly. For wider adoption, AutoML tools need to evolve so that explainability is baked into their very core. If we can automatically train, deploy and serve models that include clear explanations for their decisions, we will reach the level of trust required to deploy these models confidently within highly regulated industries such as Financial Services. The ceiling that AutoML seemingly will never break through is human creativity. The ability to extract knowledge from an experienced individual and manipulate a dataset to tackle an intricate business problem remains a quantum leap from today’s capabilities.