SHAP Is Not All You Need

Three blog posts from Christoph Molnar and Giles Hooker about SHAP and alternatives worth considering
Published: June 18, 2024
Modified: June 20, 2024

Original Blog Post: SHAP is the Blockchain of xAI
Author: Giles Hooker
Published: 2022-05-12

Original Blog Post: SHAP Is Not All You Need
Author: Christoph Molnar
Published: 2023-02-07

Original Blog Post: Shedding light on “Impossibility Theorems for Feature Attribution”
Author: Christoph Molnar
Published: 2024-06-18

My summary:

Here are three blog posts that offer essentially two different perspectives on SHAP (if you are unfamiliar with SHAP, start with the first post, which gives some background). The first two posts criticize SHAP, at least in part as a reaction to the authors' sense that some data science practitioners treat SHAP as the default model explainability algorithm and assume that SHAP is all you need. The third post, written by Christoph Molnar (who also wrote the second, critical post), actually comes to SHAP's defense.

The bottom line from these posts is that you should first be clear about what you want from model explainability / interpretability. In some scenarios SHAP is useful and gives the insight you are looking for; in others it falls short. From the second post:

SHAP importance is more about auditing how the model behaves.

From the third post, Christoph discusses the situation where you want to know: “How does the prediction for a certain data point change when we change the features just a little bit?” He points out that SHAP is not appropriate for answering this question.
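To make that distinction concrete, here is a minimal sketch (my own illustration, not from the posts) of the small-perturbation question. The `predict` function is a hypothetical stand-in for any fitted model; a central finite difference answers "what happens if I nudge this feature a little?", which is a local sensitivity, not an attribution. A SHAP value, by contrast, distributes the gap between a prediction and a baseline expectation across features, so it can be large even where the local slope is zero.

```python
import numpy as np

# Toy stand-in for a fitted black-box model's predict function
# (hypothetical; replace with your own model.predict).
def predict(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

def local_sensitivity(predict_fn, x, feature, eps=1e-4):
    """Central finite-difference estimate of how the prediction for
    a single data point changes when one feature is nudged slightly."""
    x_up, x_down = x.copy(), x.copy()
    x_up[feature] += eps
    x_down[feature] -= eps
    return (predict_fn(x_up[None, :]) - predict_fn(x_down[None, :]))[0] / (2 * eps)

x = np.array([2.0, 1.0])
# Slope of x0**2 at x0 = 2 is 4, so this prints a value close to 4.0
print(local_sensitivity(predict, x, feature=0))
```

This is essentially the question that ceteris-paribus or ICE-style analyses address, and it is a different question from the one SHAP answers.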