This repository contains the content of a workshop on using machine learning interpretability and fairness assessment (plus unfairness mitigation) to build fairer and more transparent models. The workshop is organized into the following parts (a short sketch of the core techniques follows the list):
- Part 1: Interpretability with glassbox models (EBM)
- Part 2: Explain blackbox models with SHAP (and upload explanations to Azure Machine Learning)
- Part 3: Run Interpretability on Azure Machine Learning
- Part 4: Model fairness assessment and unfairness mitigation
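As a preview of what the labs cover: Part 1 trains a glassbox Explainable Boosting Machine with `interpret`, Part 2 explains a blackbox model with `shap`, and Part 4 assesses fairness with `fairlearn`. The sketch below is illustrative only, assuming recent versions of the `interpret`, `shap`, `fairlearn`, and `scikit-learn` packages; the dataset and the stand-in sensitive feature are not taken from the workshop materials.

```python
# A minimal preview of the workshop's core libraries; illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from interpret.glassbox import ExplainableBoostingClassifier  # Part 1
import shap                                                   # Part 2
from fairlearn.metrics import MetricFrame                     # Part 4

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Part 1 -- glassbox: an EBM is interpretable by construction.
ebm = ExplainableBoostingClassifier().fit(X_train, y_train)
ebm_global = ebm.explain_global()  # per-feature contribution curves

# Part 2 -- blackbox: explain an opaque model after the fact with SHAP.
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
shap_values = shap.TreeExplainer(rf).shap_values(X_test)

# Part 4 -- fairness: compare a metric across groups of a sensitive feature.
# "mean radius > 15" is a placeholder for a real sensitive attribute.
sensitive = X_test["mean radius"] > 15
mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_test,
                 y_pred=rf.predict(X_test),
                 sensitive_features=sensitive)
print(mf.by_group)
```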
Provision your personal Lab environment
- Open Registration URL: http://bit.ly/2OjknZW
- Enter the Activation Code provided by the workshop instructors.
- Fill out the registration form and submit it.
- On the next screen click Launch Lab.
- Wait until your personal environment is provisioned. It should take approximately 3-5 minutes.
Log in to Azure ML studio
- Once the workshop environment is ready, open a new browser tab and navigate to Azure ML studio using its direct URL: https://ml.azure.com. We recommend using a private browsing window for the login to avoid credential conflicts if you already have an Azure subscription.
- Use the credentials provided in the workshop environment to sign in to Azure ML studio.
- On the welcome screen, select the pre-provisioned subscription and workspace, similar to the screenshot below:
- Click Get started!
- On the welcome screen, click the Take a quick tour button to familiarize yourself with Azure ML studio.
Create Azure Machine Learning Notebook VM
- Click the Compute tab in the left navigation bar.
- In the Notebook VM section, click New.
- Enter a Notebook VM name of your choice and click Create. Creation should take approximately 5 minutes. (An equivalent SDK-based approach is sketched after this list.)
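The same Notebook VM (a compute instance) can alternatively be created with the Azure ML Python SDK instead of the studio UI. A minimal sketch, assuming the `azureml-core` package is installed and you are authenticated; the VM name and size below are placeholders, not workshop values.

```python
# Alternative to the studio UI: create the Notebook VM from the Python SDK.
# Assumes azureml-core is installed; the name and VM size are placeholders.
from azureml.core import Workspace
from azureml.core.compute import ComputeInstance, ComputeTarget

ws = Workspace.from_config()  # reads a config.json for your workspace

config = ComputeInstance.provisioning_configuration(vm_size="STANDARD_DS3_V2")
vm = ComputeTarget.create(ws, "my-notebook-vm", config)
vm.wait_for_completion(show_output=True)  # roughly the same ~5 minute wait
```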
Clone this repository to the Notebook VM in your Azure ML workspace
- Once the Notebook VM is created and in the Running state, click the Jupyter link. This opens the Jupyter web UI in a new browser tab.
- In the Jupyter UI, click New > Terminal.
- In the terminal window, run the command `ls`.
- Note the name of your user folder and change into it with `cd <userfolder>`.
- Clone this workshop's repository by running `git clone https://github.com/microsoft/responsibleai-airlift.git` (a quick workspace connection check is sketched after this list).
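After cloning, you can sanity-check from any notebook on the VM that it is connected to your Azure ML workspace. A minimal sketch, assuming the `azureml-core` package; Notebook VMs come with it preinstalled along with a ready-made workspace `config.json`.

```python
# Quick connectivity check from a notebook running on the Notebook VM.
# Notebook VMs ship with azureml-core and a workspace config.json in place.
from azureml.core import Workspace

ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, sep="\n")
```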
Open Part 1 of the workshop
- Go back to the Jupyter window.
- Open the Interpretability with glassbox models (EBM) notebook.
You are ready to start your workshop! Have fun.