Commit 12cdec8

update explanation dashboard notebook

JessicaXYWang committed Sep 26, 2023
1 parent ce9b01f commit 12cdec8
Showing 1 changed file with 61 additions and 61 deletions.
122 changes: 61 additions & 61 deletions docs/Explore Algorithms/Responsible AI/Explanation Dashboard.ipynb
@@ -2,31 +2,34 @@
  "cells": [
   {
    "cell_type": "markdown",
+   "metadata": {
+    "collapsed": false
+   },
    "source": [
     "## Interpretability - Explanation Dashboard\n",
     "\n",
     "In this example, similar to the \"Interpretability - Tabular SHAP explainer\" notebook, we use Kernel SHAP to explain a tabular classification model built from the Adults Census dataset and then visualize the explanation in the ExplanationDashboard from https://github.com/microsoft/responsible-ai-widgets.\n",
     "\n",
     "First we import the packages and define some UDFs we will need later."
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "outputs": [],
-   "source": [
-    "%pip install raiwidgets itsdangerous==2.0.1 interpret-community"
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "outputs": [],
+   "source": [
+    "%pip install raiwidgets itsdangerous==2.0.1 interpret-community numpy==1.21.6"
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "from IPython.terminal.interactiveshell import TerminalInteractiveShell\n",
@@ -40,23 +43,23 @@
     "\n",
     "vec_access = udf(lambda v, i: float(v[i]), FloatType())\n",
     "vec2array = udf(lambda vec: vec.toArray().tolist(), ArrayType(FloatType()))"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "Now let's read the data and train a simple binary classification model."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "Now let's read the data and train a simple binary classification model."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "df = spark.read.parquet(\n",
@@ -102,46 +105,46 @@
     "lr = LogisticRegression(featuresCol=\"features\", labelCol=\"label\", weightCol=\"fnlwgt\")\n",
     "pipeline = Pipeline(stages=[strIndexer, onehotEnc, vectAssem, lr])\n",
     "model = pipeline.fit(training)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "After the model is trained, we randomly select some observations to be explained."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "After the model is trained, we randomly select some observations to be explained."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "explain_instances = (\n",
     "    model.transform(training).orderBy(rand()).limit(5).repartition(200).cache()\n",
     ")\n",
     "display(explain_instances)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "We create a TabularSHAP explainer, set the input columns to all the features the model takes, and specify the model and the target output column we are trying to explain. In this case, we are trying to explain the \"probability\" output, which is a vector of length 2, and we are only looking at the class 1 probability. Set targetClasses to `[0, 1]` if you want to explain the class 0 and class 1 probabilities at the same time. Finally, we sample 100 rows from the training data as background data, which is used for integrating out features in Kernel SHAP."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "We create a TabularSHAP explainer, set the input columns to all the features the model takes, and specify the model and the target output column we are trying to explain. In this case, we are trying to explain the \"probability\" output, which is a vector of length 2, and we are only looking at the class 1 probability. Set targetClasses to `[0, 1]` if you want to explain the class 0 and class 1 probabilities at the same time. Finally, we sample 100 rows from the training data as background data, which is used for integrating out features in Kernel SHAP."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "shap = TabularSHAP(\n",
@@ -155,24 +158,24 @@
     ")\n",
     "\n",
     "shap_df = shap.transform(explain_instances)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "collapsed": false
+   },
    "source": [
     "Once we have the resulting dataframe, we extract the class 1 probability of the model output, the SHAP values for the target class, the original features, and the true label. Then we convert it to a pandas dataframe for visualization.\n",
     "For each observation, the first element in the SHAP values vector is the base value (the mean output of the background dataset), and each subsequent element is the SHAP value for the corresponding feature."
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "shaps = (\n",
@@ -187,23 +190,23 @@
     "shaps_local.sort_values(\"probability\", ascending=False, inplace=True, ignore_index=True)\n",
     "pd.set_option(\"display.max_colwidth\", None)\n",
     "shaps_local"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "We can visualize the explanation in the [interpret-community format](https://github.com/interpretml/interpret-community) in the ExplanationDashboard from https://github.com/microsoft/responsible-ai-widgets/"
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "We can visualize the explanation in the [interpret-community format](https://github.com/interpretml/interpret-community) in the ExplanationDashboard from https://github.com/microsoft/responsible-ai-widgets/"
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "import numpy as np\n",
@@ -216,14 +219,14 @@
     "local_importance_values = shaps_local[[\"shapValues\"]]\n",
     "eval_data = shaps_local[features]\n",
     "true_y = np.array(shaps_local[[\"label\"]])"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "list_local_importance_values = local_importance_values.values.tolist()\n",
@@ -236,19 +239,16 @@
     "        # remove the bias from local importance values\n",
     "        del converted_list[0]\n",
     "        converted_importance_values.append(converted_list)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "When running this notebook on Synapse Analytics, follow the instructions in [Package management - Azure Synapse Analytics | Microsoft Docs](https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-azure-portal-add-libraries) to install the [\"raiwidgets\"](https://pypi.org/project/raiwidgets/) and [\"interpret-community\"](https://pypi.org/project/interpret-community/) packages."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "When running this notebook on Synapse Analytics, follow the instructions in [Package management - Azure Synapse Analytics | Microsoft Docs](https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-azure-portal-add-libraries) to install the [\"raiwidgets\"](https://pypi.org/project/raiwidgets/) and [\"interpret-community\"](https://pypi.org/project/interpret-community/) packages."
+   ]
   },
   {
    "cell_type": "code",
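The diff collapses everything after this point, including the notebook's final cells. For orientation, a minimal sketch of how the converted SHAP values are typically handed to the dashboard, assuming a `bias` list was populated inside the loop above (the actual cell contents are not shown in this commit view, so treat this as an approximation):

```python
# Minimal sketch of the collapsed final cells: wrap the converted SHAP
# values in an interpret-community explanation object and open the
# dashboard. `bias` is an assumption: a list presumably filled with
# converted_list[0] in the loop above, before the del.
from interpret_community.adapter import ExplanationAdapter
from raiwidgets import ExplanationDashboard

adapter = ExplanationAdapter(features, classification=True)
global_explanation = adapter.create_global(
    converted_importance_values, eval_data, expected_values=bias
)

# Launch the widget with the evaluation rows and true labels so the
# dashboard can show per-instance explanations next to ground truth.
ExplanationDashboard(global_explanation, dataset=eval_data, true_y=true_y)
```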
