
The Black Box Problem & SHAP

SHAP (SHapley Additive exPlanations) — Understanding Model Predictions

Why Explain?

Deep learning models are "black boxes": we feed in data and get predictions, but we don't know why the model made them. In regulated fields (healthcare, finance), this opacity is unacceptable.

SHAP assigns credit to each feature: "Feature X contributed +0.3 to this prediction, Feature Y contributed -0.1, etc."
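
SHAP values are additive: the model's baseline (average) prediction plus every feature's contribution reconstructs the exact prediction for a sample. A minimal sketch using the made-up numbers above:

  # Hypothetical numbers, for illustration only.
  base_value = 0.5                                    # model's average prediction
  contributions = {"Feature X": +0.3, "Feature Y": -0.1}

  # baseline + sum of per-feature SHAP values = this sample's prediction
  prediction = base_value + sum(contributions.values())
  print(prediction)  # 0.7 (up to floating-point rounding)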

The Shapley Value (Game Theory)

Imagine a coalition of 5 players splitting a $100 prize. How much should each player get?

  • Solution: Each player gets their average marginal contribution, taken over every order in which the coalition could have formed (formula below).
  • Example: If Player A adds $30 to the team on average, Player A gets $30.
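
In general, the Shapley value of player i averages the marginal contribution v(S ∪ {i}) − v(S) over all coalitions S that exclude i, weighted by how many orderings form S first:

  \phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (|N| - |S| - 1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr)

where N is the set of all players and v(S) is the prize coalition S earns on its own.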

In ML:

  • Players = Features
  • Prize = Prediction value
  • Each feature's SHAP value = its average marginal contribution to the final prediction (see the sketch below).
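
This correspondence can be made concrete. The sketch below computes exact SHAP values for a hypothetical three-feature linear model by brute force, averaging each feature's marginal contribution over every ordering; the feature names, values, and the all-zeros baseline are invented for illustration, and real SHAP implementations approximate this average because the number of orderings grows factorially.

  import math
  from itertools import permutations

  def model(x):
      # A hand-written stand-in for any black-box model.
      return 2.0 * x["age"] + 1.0 * x["income"] - 0.5 * x["debt"]

  instance = {"age": 1.0, "income": 2.0, "debt": 4.0}
  baseline = dict.fromkeys(instance, 0.0)   # "absent" features fixed at 0
  features = list(instance)

  shap_values = dict.fromkeys(features, 0.0)
  for order in permutations(features):
      x = dict(baseline)
      prev = model(x)
      for f in order:                        # reveal features one at a time
          x[f] = instance[f]
          curr = model(x)
          shap_values[f] += curr - prev      # marginal contribution of f
          prev = curr

  n_orderings = math.factorial(len(features))
  shap_values = {f: total / n_orderings for f, total in shap_values.items()}

  print(shap_values)  # {'age': 2.0, 'income': 2.0, 'debt': -2.0}
  # Additivity: baseline prediction + all contributions = actual prediction
  print(model(baseline) + sum(shap_values.values()))  # 2.0, matching model(instance)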

SHAP vs Feature Importance

Aspect         | Feature Importance                 | SHAP
What it shows  | Which features matter (globally)   | How much each feature pushed the prediction up/down (locally, per sample)
Use case       | Understand overall model behavior  | Explain a specific prediction
Complexity     | Simple, fast                       | Computationally expensive
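
To see the contrast in code, here is a minimal sketch using the shap library with a scikit-learn regressor; the dataset and model are arbitrary choices for illustration, not from the original material.

  import shap
  from sklearn.datasets import load_diabetes
  from sklearn.ensemble import RandomForestRegressor

  X, y = load_diabetes(return_X_y=True, as_frame=True)
  model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

  # Global view: one importance score per feature, for the whole model.
  print(dict(zip(X.columns, model.feature_importances_)))

  # Local view: one SHAP value per feature, per sample.
  explainer = shap.TreeExplainer(model)
  shap_values = explainer.shap_values(X.iloc[:5])
  print(shap_values[0])  # per-feature contributions for the first row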

SHAP Force Plot

A horizontal plot showing how each feature pushes a single prediction away from the baseline:

  • Red segments (positive): Features pushing the prediction higher (towards 1)
  • Blue segments (negative): Features pushing the prediction lower (towards 0)
  • Baseline: The model's average prediction, the starting point both sets of forces push against
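
Continuing the sketch above (the setup is repeated so the snippet stands alone), a force plot for a single prediction can be drawn like this; matplotlib=True renders a static figure outside notebooks, while shap.initjs() plus the default call gives the interactive version.

  import shap
  from sklearn.datasets import load_diabetes
  from sklearn.ensemble import RandomForestRegressor

  X, y = load_diabetes(return_X_y=True, as_frame=True)
  model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

  explainer = shap.TreeExplainer(model)
  shap_values = explainer.shap_values(X.iloc[:1])

  # Red segments push the prediction above the baseline (expected_value),
  # blue segments push it below; segment lengths are the SHAP values.
  shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0],
                  matplotlib=True)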