SHAP (SHapley Additive exPlanations)
The Black Box Problem & SHAP
SHAP — Understanding Model Predictions
Why Explain?
Deep learning models are "black boxes": we feed in data and get predictions, but not the reasons behind them. In regulated fields (healthcare, finance), this is unacceptable.
SHAP assigns credit to each feature: "Feature X contributed +0.3 to this prediction, Feature Y contributed -0.1, etc."
The Shapley Value (Game Theory)
Imagine a coalition of 5 players splitting a $100 prize. How much should each player get?
- Solution (the Shapley value): each player receives their marginal contribution, averaged over every order in which the coalition could have formed.
- Example: if adding Player A raises the team's payout by $30 on average across all join orders, Player A gets $30.
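The averaging over join orders can be sketched with a toy three-player game. The payoff table below is made up for illustration; real SHAP implementations approximate this computation rather than enumerating all orders.

```python
from itertools import permutations

# Hypothetical characteristic function: payoff (in $) for each coalition
v = {
    frozenset(): 0,
    frozenset("A"): 30, frozenset("B"): 20, frozenset("C"): 10,
    frozenset("AB"): 60, frozenset("AC"): 50, frozenset("BC"): 40,
    frozenset("ABC"): 100,
}

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    shap = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v[frozenset(coalition)]
            coalition.add(p)
            shap[p] += v[frozenset(coalition)] - before  # marginal contribution
    return {p: total / len(orders) for p, total in shap.items()}

values = shapley_values("ABC", v)
print(values)  # A≈43.33, B≈33.33, C≈23.33 — they sum to v(ABC) = 100
```

Note the "efficiency" property on the last line: the Shapley values always sum to the full prize, which is exactly why SHAP values for one prediction sum to the gap between that prediction and the baseline.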
In ML:
- Players = Features
- Prize = the model's prediction (more precisely, its difference from the average prediction)
- Each feature's SHAP value = its average marginal contribution to the final prediction.
SHAP vs Feature Importance
| Aspect | Feature Importance | SHAP |
|---|---|---|
| What it shows | Which features matter (globally) | How much each feature pushed prediction up/down (locally, per sample) |
| Use case | Understand model behavior overall | Explain a specific prediction |
| Complexity | Simple, fast | Computationally expensive |
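The local-vs-global distinction in the table can be made concrete with a linear model, where exact SHAP values have a closed form: phi_i = w_i · (x_i − E[x_i]), assuming independent features. The weights and data below are made up for illustration.

```python
# Hypothetical fitted linear model: y = bias + w·x
weights = [0.5, -0.2, 0.1]
bias = 1.0
data = [[1.0, 2.0, 3.0],
        [3.0, 0.0, 1.0],
        [2.0, 4.0, 2.0]]

# Baseline = model's average prediction (prediction at the feature means)
means = [sum(col) / len(col) for col in zip(*data)]
baseline = bias + sum(w * m for w, m in zip(weights, means))

def local_shap(x):
    """Local explanation: per-feature push for ONE sample."""
    return [w * (xi - m) for w, xi, m in zip(weights, x, means)]

for x in data:
    phi = local_shap(x)
    pred = bias + sum(w * xi for w, xi in zip(weights, x))
    # Additivity: baseline + sum of SHAP values == this sample's prediction
    print(phi, baseline + sum(phi), pred)

# Global importance = mean absolute SHAP value per feature, across samples
global_imp = [sum(abs(local_shap(x)[i]) for x in data) / len(data)
              for i in range(len(weights))]
print(global_imp)
```

The `local_shap` rows answer "why this prediction?", while `global_imp` collapses them into a single "which features matter?" ranking, mirroring the two table columns.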
SHAP Force Plot
A horizontal plot showing how each feature's SHAP value pushes the prediction away from the baseline:
- Red segments (positive SHAP values): features pushing the prediction higher (towards 1 in binary classification)
- Blue segments (negative SHAP values): features pushing the prediction lower (towards 0)
- Baseline: the model's average prediction over the background data
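The layout above can be approximated in plain text. The SHAP values below are hypothetical numbers for a single prediction, not output from the `shap` library:

```python
# Hypothetical SHAP values for one sample of a binary classifier
baseline = 0.5                      # model's average prediction
shap_values = {"age": 0.20, "income": -0.05, "bmi": 0.12, "smoker": -0.02}

# Additivity: prediction = baseline + sum of SHAP values
prediction = baseline + sum(shap_values.values())

# Largest absolute impact first, as force plots draw them
for name, phi in sorted(shap_values.items(), key=lambda kv: -abs(kv[1])):
    color = "RED (+)" if phi >= 0 else "BLUE(-)"
    bar = "#" * max(1, round(abs(phi) * 50))  # bar length ∝ |SHAP value|
    print(f"{name:>7} {color} {bar} {phi:+.2f}")

print(f"baseline={baseline:.2f} -> prediction={prediction:.2f}")
```

In the real library, the equivalent one-liner is a call along the lines of `shap.plots.force(...)` on an explanation object, which renders this as an interactive HTML widget.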