INSIGHTS Opinion
By Julian Gaberle, 06/05/2021
Artificial Intelligence (“AI”) systems have seen widespread adoption – from online retailers’ recommendation systems to smart navigation and gaming. AI, however, increasingly finds applications in less transparent areas such as defence and surveillance, or finance and credit scoring. While these applications can produce accurate results, they are often highly complex. This complexity has led researchers and policymakers to ask: is it possible to understand how AI works, or is AI a ‘black box’?
On April 21st, 2021, the European Commission published its ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence’, or the ‘Artificial Intelligence Act’ (AIA). The proposal aims to build a regulatory framework that allows AI innovation while mitigating the potentially high risks associated with certain AI applications (see our previous blog post on AI regulation). A key theme of the proposal is ‘[…] addressing the opacity, complexity, bias […] of certain AI systems’. We have written about data and model bias in a previous blog post; here we discuss how model transparency can be improved to better address the ‘explainability’ of AI.
What does ‘explainability’ mean in this context? The Royal Society has identified five key attributes that are desired when deploying AI systems:
- Interpretable: implying some sense of understanding how the technology works.
- Explainable: implying that a wider range of users can understand why, or how, a conclusion was reached.
- Transparent: implying some level of accessibility to the data or algorithm.
- Justifiable: implying there is an understanding of the case in support of a particular outcome.
- Contestable: implying users have the information they need to argue against a decision or classification.
Why ‘Explainable AI’ (XAI):
Some level of explainability or interpretability is necessary when deploying AI systems, in order to:
- Give users confidence in AI systems. Widespread AI adoption requires trust that the systems work well and for the benefit of their users.
- Monitor, manage or reduce bias. In almost all AI applications, bias has to be addressed to ensure the AI system performs as intended, or, at a minimum, so that developers and users are aware of the underlying implicit or explicit bias. For example, texts used for training natural language models often perpetuate outdated gender roles.
- Adhere to regulatory standards or policy requirements. AI regulation such as the AIA will curtail the currently wide-ranging freedom of AI development, and compliance with it will need to be demonstrated.
- Safeguard against vulnerabilities. Understanding the limitations of AI systems can help protect against detrimental decisions. For example, adversarial attacks on image classification systems can fool a system into producing an arbitrary outcome.
We believe the next generation of AI systems will have the ability to explain their “rationale”, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. Our strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models.
We pursue a variety of techniques in order to generate a portfolio of methods that will provide a range of design options covering the performance-versus-explainability trade space. Two approaches we currently take are:
- Perturbation correlation analysis:
For a given point of interest, e.g. the price prediction for a single stock on a given date, we examine the impact of small changes (perturbations) to the input features on the AI model’s output prediction. This gives insight into the sensitivity of the model to variations in the input data, which may include missing values, wrongly reported fundamental numbers, or previously unobserved market conditions (see the first sketch after this list).
- Local surrogate model:
The relationship between inputs and outputs in AI models is often highly complex and non-linear. However, if we assume that, in the vicinity of a given point of interest, the mapping between inputs and outputs can be reasonably approximated by a linear model, then we can fit a linear surrogate model, which allows us to examine the contribution of each feature to the model’s output at that point (see the second sketch after this list).
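To make the perturbation idea concrete, here is a minimal sketch in Python. It assumes only a generic `predict_fn` (any fitted model exposing a scikit-learn-style `predict` method) and a single feature vector `x` for the point of interest; the function name and parameters are illustrative, not a description of a production system.

```python
import numpy as np

def perturbation_sensitivity(predict_fn, x, eps=0.01):
    """Estimate the sensitivity of a model's prediction at a single
    point of interest by perturbing one input feature at a time.

    predict_fn: callable mapping a 2-D array of features to predictions.
    x:          1-D array of input features for the point of interest.
    eps:        relative size of the perturbation applied to each feature.
    """
    x = np.asarray(x, dtype=float)
    base = float(predict_fn(x.reshape(1, -1))[0])
    sensitivities = np.zeros_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        # Shift one feature by a small amount relative to its magnitude.
        step = eps * (abs(x[i]) if x[i] != 0 else 1.0)
        x_pert[i] += step
        # Finite-difference estimate of the local sensitivity.
        sensitivities[i] = (float(predict_fn(x_pert.reshape(1, -1))[0]) - base) / step
    return base, sensitivities

# Hypothetical usage: `stock_model` is any fitted regressor and
# `features_today` the feature vector for one stock on one date.
# base_price, sens = perturbation_sensitivity(stock_model.predict, features_today)
```

Large sensitivities flag the features whose small variations – missing values, misreported fundamentals, unusual market conditions – would move the prediction the most.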
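The local surrogate idea can be sketched in a similar spirit to methods such as LIME: sample perturbed points around the point of interest, weight them by proximity, and fit a weighted linear model whose coefficients approximate each feature’s local contribution. Again, `predict_fn`, the sampling scale and the kernel width are illustrative assumptions rather than our actual setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_surrogate(predict_fn, x, n_samples=1000, scale=0.1, kernel_width=0.75):
    """Fit a weighted linear model that approximates `predict_fn`
    in the neighbourhood of the point of interest `x`.

    The surrogate's coefficients can be read as the local contribution
    of each feature to the model's output.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed=0)

    # Sample perturbed points around x (Gaussian noise, scaled per feature).
    spread = np.where(np.abs(x) > 0, np.abs(x), 1.0)
    X_local = x + rng.normal(scale=scale, size=(n_samples, x.size)) * spread
    y_local = predict_fn(X_local)

    # Weight samples by proximity to x so that the fit stays local.
    distances = np.linalg.norm((X_local - x) / spread, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

    # A small ridge penalty keeps the surrogate stable when features correlate.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_local, y_local, sample_weight=weights)
    return surrogate.coef_, surrogate.intercept_

# Hypothetical usage, mirroring the stock example above:
# contributions, offset = local_linear_surrogate(stock_model.predict, features_today)
```

The ridge penalty and Gaussian proximity kernel are common, but by no means the only, choices; what matters is that the surrogate is only trusted in the neighbourhood in which it was fitted.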
While achieving explainability in AI models is essential, understanding the limitations of AI applications and managing the resulting effects is equally crucial to ensure safe and robust performance. Such efforts need to consider the entire pipeline of AI development and implementation, including how objectives for the system are set, what data is used to train and evaluate the models, and what the implications are for the end user and wider society. In our opinion, only then can trust in autonomous AI decision makers be built – a crucial step in building sustainable AI deployments.