This repo presents techniques for interpreting black-box Machine Learning models and assessing their fairness.

An explanation of the model interpretability techniques can be found in this post.
Techniques covered:

- Model Interpretation
  - Global Importance
    - Feature Importance (evaluated by the XGBoost model and by SHAP)
    - Summary Plot (SHAP)
    - Permutation Importance (ELI5)
    - Partial Dependence Plot (evaluated by PDPBox and by SHAP)
    - Global Surrogate Model (Decision Tree and Logistic Regression)
  - Local Importance
    - Local Interpretable Model-agnostic Explanations (LIME)
    - SHapley Additive exPlanations (SHAP)
      - Force Plot
      - Decision Plot
- Model Fairness
  - FairML
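As a taste of the permutation importance idea listed above, here is a minimal sketch. The repo uses ELI5's `PermutationImportance`; this example shows the same technique with scikit-learn's `permutation_importance` instead, on a synthetic dataset and a RandomForest model chosen purely for illustration:

```python
# Permutation importance sketch (scikit-learn variant of the ELI5 approach).
# Dataset and model are illustrative, not from this repo.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:+.3f}")
```

Because the scores are computed on held-out data, uninformative features can even get slightly negative importances.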
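The global surrogate model entry can likewise be sketched in a few lines: train an interpretable model (here a shallow decision tree) to mimic the predictions of a black box, then check its fidelity. The gradient-boosting "black box" and the synthetic data are assumptions for the sake of a self-contained example:

```python
# Global surrogate model sketch: approximate a black-box model with an
# interpretable decision tree. All model/data choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# The "black box" whose behavior we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# The surrogate is trained on the black box's PREDICTIONS, not the true
# labels: we are explaining the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_preds)

# Fidelity: how closely the surrogate reproduces the black box.
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
```

If fidelity is high, the tree's splits give a rough global picture of the black box's decision logic; if it is low, the surrogate's explanations should not be trusted.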