LIME (Local Interpretable Model-agnostic Explanations)

LIME vs SHAP

LIME — Local Interpretability

The Key Difference: Local vs Global

SHAP: Global Credit Assignment

  • Shapley values apply cooperative game theory: each feature is credited with its average marginal contribution over all feature coalitions (see the toy computation after this list)
  • Comes with theoretical guarantees (attributions sum exactly to the prediction and are consistent)
  • Computationally slower (exact Shapley values require evaluating exponentially many coalitions)
  • Best for: Rigorous attributions that can be aggregated across the whole dataset into a global picture of feature importance
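
To ground the credit-assignment idea, here is a minimal sketch that computes exact Shapley values for a hypothetical three-feature model by enumerating every coalition. The value function, baseline, and numbers are invented for illustration; real SHAP libraries approximate this because exact enumeration scales as 2^n in the number of features.

# Exact Shapley values for a toy 3-feature "model", by brute-force coalition
# enumeration. The value function and baseline are illustrative assumptions;
# real SHAP implementations approximate this because it scales as 2^n.
from itertools import combinations
from math import factorial

features = ["has_fur", "has_tail", "meows"]

def model_value(coalition):
    """Hypothetical model output when only the named features are 'present'."""
    score = 0.5                      # baseline prediction with no features known
    if "has_fur" in coalition:
        score += 0.30
    if "has_tail" in coalition:
        score += 0.10
    if "meows" in coalition:
        score -= 0.35
    return score

def shapley_value(feature):
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            # Weight = |S|! (n - |S| - 1)! / n!  for each coalition S not containing the feature.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            marginal = model_value(set(coalition) | {feature}) - model_value(set(coalition))
            total += weight * marginal
    return total

for f in features:
    print(f"{f:>10s}: {shapley_value(f):+.3f}")

The three values sum exactly to the difference between the prediction with all features and the baseline, which is the "efficiency" guarantee referenced above.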

LIME: Local Approximation

  • Approximates the complex model with a simple linear model in the neighborhood of one prediction
  • Model-agnostic (works with ANY model you can query for predictions)
  • Fast and computationally cheap (local sampling plus a linear fit)
  • Best for: Quick, per-prediction explanations for non-technical users

How LIME Works

  1. Perturb: Add noise to the input (slightly change features)
  2. Predict: Run all perturbed inputs through the black-box model
  3. Weight: Weight each perturbed sample by its proximity to the original input (closer samples count more)
  4. Fit: Fit a simple, interpretable model (e.g., a weighted linear or logistic regression) to the perturbed data
  5. Explain: The coefficients of that local surrogate model are LIME's explanation! (A minimal sketch follows below.)
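
To make the five steps concrete, here is a minimal from-scratch sketch in Python (it does not use the lime package). The black-box model, the sampling scale, and the kernel width are all illustrative assumptions, not canonical choices.

# Minimal from-scratch LIME-style explanation for one tabular prediction.
# Assumptions: the black-box model, sampling scale, and kernel width below
# are illustrative choices, not the defaults of any particular library.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in black-box model trained on synthetic data.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def lime_explain(x, model, n_samples=1000, kernel_width=0.75):
    """Explain one prediction by fitting a weighted linear surrogate around x."""
    # 1. Perturb: sample points in the neighborhood of the instance of interest.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # 2. Predict: query the black box for the class-1 probability of each sample.
    p = model.predict_proba(Z)[:, 1]
    # 3. Weight: closer samples get larger weights (exponential kernel on distance).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit: weighted ridge regression approximates the black box locally.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    # 5. Explain: the surrogate's coefficients are the local feature attributions.
    return surrogate.coef_

x0 = X_train[0]
print("black-box P(class 1):", black_box.predict_proba(x0.reshape(1, -1))[0, 1])
print("local coefficients:  ", lime_explain(x0, black_box))

In this synthetic setup you would expect the coefficients for the first two features to dominate, since only they drive the labels the black box was trained on.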

Intuition:

"I can't understand why this complex neural network said 'cat', so I'll fit a simple line around that prediction and see which features mattered."

LIME Explanation Example

Original prediction: Classifier says "DOG" (probability 0.95)

LIME's local linear approximation attributes the prediction to the features below (a runnable toy reconstruction follows the list):

  • Feature "has_fur" = strong positive contributor
  • Feature "has_tail" = moderate positive contributor
  • Feature "meows" = strong negative contributor