Kibria2017/interpretable_and_fair_ml

Interpretability and Fairness in Machine Learning

This repo presents different techniques for interpreting, and assessing the fairness of, black-box Machine Learning models.

An explanation of the model interpretability techniques can be found in this post.

Techniques covered:

  • Model Interpretation

    • Global Importance
      1. Feature Importance (evaluated by the XGBoost model and by SHAP)
      2. Summary Plot (SHAP)
      3. Permutation Importance (ELI5)
      4. Partial Dependence Plot (evaluated by PDPBox and by SHAP)
      5. Global Surrogate Model (Decision Tree and Logistic Regression)
    • Local Importance
      1. Local Interpretable Model-agnostic Explanations (LIME)
      2. SHapley Additive exPlanations (SHAP)
        • Force Plot
        • Decision Plot
  • Model Fairness

    • FairML
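The repo computes permutation importance with ELI5; the underlying idea — shuffle one feature at a time and measure how much the model's score drops — can also be sketched with scikit-learn's `permutation_importance`. The synthetic dataset below is illustrative, not from the repo:

```python
# Permutation importance: shuffle each feature and measure the score drop.
# The repo uses ELI5; sklearn.inspection.permutation_importance implements
# the same idea and is used here to keep the sketch self-contained.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only the first 3 of 6 features are informative.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature {i}: {mean:.3f}")
```

The informative features (here the first three) should receive clearly higher mean importance than the noise features.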
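Partial dependence plots show the model's average prediction as one feature sweeps its range. The repo builds them with PDPBox and SHAP; the quantity itself can be computed with scikit-learn's `partial_dependence`, as in this minimal sketch on made-up data:

```python
# Partial dependence: average the model's prediction over the data while
# sweeping one feature across a grid. The repo uses PDPBox/SHAP; here
# sklearn.inspection.partial_dependence computes the same quantity.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(500, 3))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)

# The grid key is "grid_values" in scikit-learn >= 1.3, "values" before.
key = "grid_values" if "grid_values" in pd_result else "values"
grid = pd_result[key][0]          # grid of feature-0 values
avg = pd_result["average"][0]     # average prediction at each grid point
```

Since the target rises linearly in feature 0, the partial dependence curve for that feature should be increasing.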
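A global surrogate approximates the black box with an interpretable model (the repo uses a decision tree and a logistic regression) trained on the black box's *predictions* rather than the true labels. A minimal sketch on synthetic data, with a fidelity check:

```python
# Global surrogate: fit an interpretable model to the black box's
# predictions, then measure fidelity (how often the two agree).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train the surrogate on the black box's labels, not the true ones.
bb_pred = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

fidelity = accuracy_score(bb_pred, surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

A surrogate is only trustworthy as an explanation when its fidelity to the black box is high; if a depth-3 tree cannot track the model, a deeper tree (at some cost in readability) may be needed.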
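SHAP attributes a prediction to each feature via its Shapley value: the feature's average marginal contribution over all coalitions of features. The `shap` library estimates this efficiently; for a handful of features the exact definition can be enumerated directly, as in this from-scratch sketch (absent features are replaced by a background value — an assumption mirroring SHAP's background dataset):

```python
# Exact Shapley values by enumerating all feature coalitions.
# "Absent" features take the background value, as in SHAP.
import itertools
import math
import numpy as np

def exact_shapley(f, x, background):
    """Exact Shapley values of f at x, against a background vector."""
    n = len(x)
    phi = np.zeros(n)

    def value(subset):
        z = background.copy()
        z[list(subset)] = x[list(subset)]  # features in the coalition take x's values
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# For a linear model, the Shapley value of feature i is w_i * (x_i - bg_i).
w = np.array([2.0, -1.0, 0.5])
f = lambda z: float(w @ z)
x = np.array([1.0, 1.0, 1.0])
background = np.array([0.0, 0.5, -1.0])
phi = exact_shapley(f, x, background)  # w_i * (x_i - bg_i) = [2.0, -0.5, 1.0]
```

The attributions sum to `f(x) - f(background)` — the additivity property that SHAP's force and decision plots visualize.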
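LIME explains a single prediction by fitting a simple, interpretable model to the black box's outputs on random perturbations of the instance, weighted by proximity. The `lime` library adds sparsity and smarter sampling; this is only a minimal from-scratch sketch of the core idea on made-up data:

```python
# LIME's core idea: sample around one instance, query the black box,
# and fit a locally weighted linear model as the explanation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

x = np.array([0.2, -0.4, 0.1, 0.9])  # instance to explain

# 1. Sample perturbations in a neighborhood of x.
Z = x + rng.normal(scale=0.3, size=(1000, 4))
# 2. Weight samples by proximity to x (RBF kernel).
weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.5)
# 3. Fit an interpretable local model to the black box's predictions.
local = Ridge(alpha=1.0).fit(Z, black_box.predict(Z), sample_weight=weights)
print(local.coef_)  # local slopes: features 0 and 1 dominate
```

The local coefficients recover the signs and relative magnitudes of the true generating function near `x`, which is exactly what a LIME explanation reports.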
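FairML audits a black-box model by quantifying how much its predictions depend on each input, such as a protected attribute. A simpler, related check — sketched here instead of FairML's own perturbation procedure, and on synthetic data — is demographic parity: comparing the positive-prediction rate across groups defined by a sensitive feature:

```python
# Demographic parity gap: the difference in positive-prediction rates
# between two groups. A sketch of one fairness check, not FairML itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)  # pretend feature 0 encodes a group
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

pred = model.predict(X)
rate_0 = pred[sensitive == 0].mean()  # positive rate in group 0
rate_1 = pred[sensitive == 1].mean()  # positive rate in group 1
parity_gap = abs(rate_0 - rate_1)
print(f"demographic parity gap: {parity_gap:.2f}")
```

A gap near zero means the model's positive rate is similar across groups; a large gap flags the sensitive feature (or its proxies) for closer auditing.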
