AI has been making rapid advances, and even if we choose not to use it, it touches our lives in many ways.
Over the years, AI models have become very complex; some have more than 100 million parameters. With a model of that size, it is hard to explain how it arrived at its results.
Why bother with model interpretability?
If we use AI to solve a problem like recommending products to customers or sorting mail by postal code, we do not need to worry about the model’s interpretability. But if we are dealing with decisions that affect people in a significant way, we not only want the model to be fair, but also to be able to explain its decision-making process.
Here are some examples where we need to explain the rationale behind a decision to the people involved:
- Credit decisions
- Forensic analysis
- College admissions
- Medical research
- Decisions where regulatory bodies demand explanations
The need for interpretable AI is quite real. In 2018, Amazon scrapped an AI-based resume-screening tool because it showed a bias against women. Any model is only as good as the data used to train it, so the demand for interpretable AI is healthy not just for society but also for business.
There are many approaches to interpreting a complex model. I will explain two popular methods.
Local Interpretable Model-Agnostic Explanations (LIME)
A complex model usually means a non-linear decision boundary. For the sake of simplicity, let us assume that we have only two input variables and want to classify the data points into two classes. This assumption makes the problem easy to visualise. Let us look at the following diagram.
In the diagram above, assume we have a data set of people with two input variables, Age and Income, and we want to classify whether a person has diabetes. A red dot means the person has diabetes and a green dot means they do not. Notice that the decision boundary is non-linear.
If we need to explain why the model classified a person as diabetic, we can create a proxy function that is linear and approximates the model well in a small region.
The red straight line at the bottom right is the proxy decision boundary. Note that this linear proxy is local (hence the “local” in LIME). For points that are not in the vicinity of this proxy function, we need a different proxy function.
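To make the idea concrete, here is a minimal LIME-style sketch in Python. The `black_box` function below is a stand-in for a complex classifier with a circular (non-linear) decision boundary over Age and Income; all data, point values, and kernel settings are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a complex black-box classifier: it predicts "diabetic" (1)
# inside a circular region of the (age, income) plane -- a non-linear
# decision boundary like the one in the diagram.
def black_box(Z):
    return (((Z[:, 0] - 0.5) ** 2 + (Z[:, 1] - 0.5) ** 2) < 0.1).astype(float)

def explain_locally(x, n_samples=2000, width=0.1):
    """LIME-style local surrogate: a proximity-weighted linear fit around x."""
    # 1. Sample perturbed points in the neighbourhood of x.
    Z = x + rng.normal(scale=width, size=(n_samples, 2))
    # 2. Ask the black box to label them.
    labels = black_box(Z)
    # 3. Weight each sample by its proximity to x (an RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    # 4. Weighted least-squares fit of a linear model; its coefficients
    #    are the local explanation for the two features.
    A = np.column_stack([np.ones(n_samples), Z])  # intercept + features
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, labels * sw[:, 0], rcond=None)
    return coef[1:]  # local weights for age and income

age_w, income_w = explain_locally(np.array([0.5, 0.75]))
print(f"local weight: age {age_w:+.3f}, income {income_w:+.3f}")
```

For the chosen point (0.5, 0.75), which sits near the upper edge of the “diabetic” region, the surrogate gives Income a negative weight: locally, increasing income moves the point across the boundary into the non-diabetic class.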
Shapley Additive Explanations (SHAP)
SHAP is an extension of Shapley values, a concept from game theory introduced by Lloyd Shapley in 1953. Imagine a rowing competition with five rowers in each boat. Once the race is over, how should the prize money be divided among the winning team members?
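Shapley values answer the prize-money question directly: pay each player their marginal contribution, averaged over every order in which the team could have been assembled. The sketch below uses three rowers and made-up coalition values (how much each subset of rowers could win on its own); every name and number is hypothetical.

```python
from itertools import permutations
from math import factorial

# Hypothetical prize money (in dollars) that each coalition of rowers
# could win on its own -- the numbers are made up for illustration.
v = {
    frozenset(): 0,
    frozenset({"Ann"}): 10,
    frozenset({"Bob"}): 20,
    frozenset({"Cat"}): 30,
    frozenset({"Ann", "Bob"}): 40,
    frozenset({"Ann", "Cat"}): 50,
    frozenset({"Bob", "Cat"}): 60,
    frozenset({"Ann", "Bob", "Cat"}): 100,
}
players = ["Ann", "Bob", "Cat"]

def shapley(player):
    """Average the player's marginal contribution over every join order."""
    total = 0
    for order in permutations(players):
        k = order.index(player)
        joined_before = frozenset(order[:k])
        total += v[joined_before | {player}] - v[joined_before]
    return total / factorial(len(players))

payouts = {p: shapley(p) for p in players}
print(payouts)  # the payouts always sum to v(full team) = 100
```

Notice the “efficiency” property: the payouts always add up to exactly what the full team won, so nothing is left over or double-counted.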
You can think of an AI model as a similar collaborative game. In our example, Age and Income are the players, and the ‘decision’ of being diabetic or non-diabetic is the outcome. Using SHAP, we can assign each variable (i.e. Age and Income) a contribution to a single prediction (a local explanation) or, averaged over all predictions, to the model as a whole (a global explanation). The math behind SHAP is a bit involved, so I will not elaborate on it here.
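Although the full math is involved, the core idea fits in a short sketch: a feature’s Shapley value is its average marginal contribution to a prediction over all orders in which the features could be “revealed”. The black-box model, the synthetic background data, and the background-averaging definition of v(S) below are all illustrative assumptions, not the exact SHAP implementation.

```python
from itertools import permutations
from math import factorial
import numpy as np

rng = np.random.default_rng(1)

# The same illustrative black box as in the LIME sketch: "diabetic" (1)
# inside a circular region of the (age, income) plane.
def black_box(Z):
    return (((Z[:, 0] - 0.5) ** 2 + (Z[:, 1] - 0.5) ** 2) < 0.1).astype(float)

# Background data used to "average out" features absent from a coalition.
X_background = rng.uniform(0, 1, size=(5000, 2))

def value(x, subset):
    """v(S): expected prediction with the features in S fixed to x's values
    and the remaining features drawn from the background data."""
    Z = X_background.copy()
    idx = list(subset)
    Z[:, idx] = x[idx]
    return black_box(Z).mean()

def shapley_values(x):
    """Each feature's average marginal contribution over all feature orders."""
    n = len(x)
    phi = np.zeros(n)
    for order in permutations(range(n)):
        fixed = []
        for i in order:
            before = value(x, fixed)
            fixed.append(i)
            phi[i] += value(x, fixed) - before
    return phi / factorial(n)

phi = shapley_values(np.array([0.5, 0.75]))
print(f"contributions: age {phi[0]:+.3f}, income {phi[1]:+.3f}")
```

By the efficiency property, the two contributions sum exactly to the gap between this person’s prediction (1.0, diabetic) and the average prediction over the background data. For real models with many features, SHAP libraries approximate this computation instead of enumerating every feature order.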
If you are interested in learning more about interpretable AI, please reach out to us.