diff --git a/docs/Explore Algorithms/Responsible AI/Explanation Dashboard.ipynb b/docs/Explore Algorithms/Responsible AI/Explanation Dashboard.ipynb
index f543dc78f5..4b7211fb09 100644
--- a/docs/Explore Algorithms/Responsible AI/Explanation Dashboard.ipynb
+++ b/docs/Explore Algorithms/Responsible AI/Explanation Dashboard.ipynb
@@ -2,31 +2,34 @@
  "cells": [
   {
    "cell_type": "markdown",
+   "metadata": {
+    "collapsed": false
+   },
    "source": [
     "## Interpretability - Explanation Dashboard\n",
     "\n",
     "In this example, similar to the \"Interpretability - Tabular SHAP explainer\" notebook, we use Kernel SHAP to explain a tabular classification model built from the Adults Census dataset and then visualize the explanation in the ExplanationDashboard from https://github.com/microsoft/responsible-ai-widgets.\n",
     "\n",
     "First we import the packages and define some UDFs we will need later."
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "outputs": [],
-   "source": [
-    "%pip install raiwidgets itsdangerous==2.0.1 interpret-community"
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "outputs": [],
+   "source": [
+    "%pip install raiwidgets itsdangerous==2.0.1 interpret-community numpy==1.21.6"
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "from IPython.terminal.interactiveshell import TerminalInteractiveShell\n",
@@ -40,23 +43,23 @@
     "\n",
     "vec_access = udf(lambda v, i: float(v[i]), FloatType())\n",
     "vec2array = udf(lambda vec: vec.toArray().tolist(), ArrayType(FloatType()))"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "Now let's read the data and train a simple binary classification model."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "Now let's read the data and train a simple binary classification model."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "df = spark.read.parquet(\n",
@@ -102,46 +105,46 @@
     "lr = LogisticRegression(featuresCol=\"features\", labelCol=\"label\", weightCol=\"fnlwgt\")\n",
     "pipeline = Pipeline(stages=[strIndexer, onehotEnc, vectAssem, lr])\n",
     "model = pipeline.fit(training)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "After the model is trained, we randomly select some observations to be explained."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "After the model is trained, we randomly select some observations to be explained."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "explain_instances = (\n",
     "    model.transform(training).orderBy(rand()).limit(5).repartition(200).cache()\n",
     ")\n",
     "display(explain_instances)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "We create a TabularSHAP explainer, set the input columns to all the features the model takes, specify the model and the target output column we are trying to explain. In this case, we are trying to explain the \"probability\" output which is a vector of length 2, and we are only looking at class 1 probability. Specify targetClasses to `[0, 1]` if you want to explain class 0 and 1 probability at the same time. Finally we sample 100 rows from the training data for background data, which is used for integrating out features in Kernel SHAP."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "We create a TabularSHAP explainer, set the input columns to all the features the model takes, specify the model and the target output column we are trying to explain. In this case, we are trying to explain the \"probability\" output which is a vector of length 2, and we are only looking at class 1 probability. Specify targetClasses to `[0, 1]` if you want to explain class 0 and 1 probability at the same time. Finally we sample 100 rows from the training data for background data, which is used for integrating out features in Kernel SHAP."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "shap = TabularSHAP(\n",
@@ -155,24 +158,24 @@
     ")\n",
     "\n",
     "shap_df = shap.transform(explain_instances)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "collapsed": false
+   },
    "source": [
     "Once we have the resulting dataframe, we extract the class 1 probability of the model output, the SHAP values for the target class, the original features and the true label. Then we convert it to a pandas dataframe for visualization.\n",
     "For each observation, the first element in the SHAP values vector is the base value (the mean output of the background dataset), and each of the following element is the SHAP values for each feature."
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
  },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "shaps = (\n",
@@ -187,23 +190,23 @@
     "shaps_local.sort_values(\"probability\", ascending=False, inplace=True, ignore_index=True)\n",
     "pd.set_option(\"display.max_colwidth\", None)\n",
     "shaps_local"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "We can visualize the explanation in the [interpret-community format](https://github.com/interpretml/interpret-community) in the ExplanationDashboard from https://github.com/microsoft/responsible-ai-widgets/"
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "We can visualize the explanation in the [interpret-community format](https://github.com/interpretml/interpret-community) in the ExplanationDashboard from https://github.com/microsoft/responsible-ai-widgets/"
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "import numpy as np\n",
@@ -216,14 +219,14 @@
     "local_importance_values = shaps_local[[\"shapValues\"]]\n",
     "eval_data = shaps_local[features]\n",
     "true_y = np.array(shaps_local[[\"label\"]])"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
    "outputs": [],
    "source": [
     "list_local_importance_values = local_importance_values.values.tolist()\n",
@@ -236,19 +239,16 @@
     "    # remove the bias from local importance values\n",
     "    del converted_list[0]\n",
     "    converted_importance_values.append(converted_list)"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "When running Synapse Analytics, please follow instructions here [Package management - Azure Synapse Analytics | Microsoft Docs](https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-azure-portal-add-libraries) to install [\"raiwidgets\"](https://pypi.org/project/raiwidgets/) and [\"interpret-community\"](https://pypi.org/project/interpret-community/) packages."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "When running Synapse Analytics, please follow instructions here [Package management - Azure Synapse Analytics | Microsoft Docs](https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-azure-portal-add-libraries) to install [\"raiwidgets\"](https://pypi.org/project/raiwidgets/) and [\"interpret-community\"](https://pypi.org/project/interpret-community/) packages."
+   ]
   },
   {
    "cell_type": "code",