Commit 11a1654

add remaining type hints to fairness episode
annapmeyer committed Oct 17, 2024
1 parent 7a94bcf commit 11a1654
Showing 2 changed files with 170 additions and 149 deletions.
312 changes: 166 additions & 146 deletions code/fairness.ipynb

Large diffs are not rendered by default.

7 changes: 4 additions & 3 deletions episodes/3-model-fairness-deep-dive.md
@@ -91,7 +91,7 @@ dataset_orig_panel19_train.convert_to_dataframe()[0].head()
Show details about the data.

```python
-def describe(train=None, val=None, test=None) -> None:
+def describe(train:MEPSDataset19=None, val:MEPSDataset19=None, test:MEPSDataset19=None) -> None:
'''
Print information about the test dataset (and train and validation dataset, if
provided). Prints the dataset shape, favorable and unfavorable labels,
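
The new `MEPSDataset19` annotations only resolve if the class is in scope; presumably the episode already imports it earlier, along these lines:

```python
# Assumed import (not visible in this hunk): the dataset class the new
# annotations refer to lives in aif360's datasets module.
from aif360.datasets import MEPSDataset19
```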
@@ -151,6 +151,7 @@ We will train a logistic regression classifier. To do so, we have to import vari
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline # allows to stack modeling steps
+from sklearn.pipeline import Pipeline # allow us to reference the Pipeline object type
```
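
The added `Pipeline` import exists purely for annotation: `make_pipeline` constructs and returns a `sklearn.pipeline.Pipeline`, so having the class name in scope lets later functions declare it as a parameter type. A minimal sketch of that relationship (the model steps are taken from the imports above; the variable name `model` is illustrative):

```python
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# make_pipeline stacks the steps and returns a Pipeline instance,
# which is the type the new annotations refer to.
model: Pipeline = make_pipeline(StandardScaler(), LogisticRegression())
assert isinstance(model, Pipeline)
```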

```python
@@ -179,7 +180,7 @@ from aif360.metrics import ClassificationMetric
```

```python
-def test(dataset, model, thresh_arr: np.ndarray) -> dict:
+def test(dataset: MEPSDataset19, model:Pipeline, thresh_arr: np.ndarray) -> dict:
'''
Given a dataset, model, and list of potential cutoff thresholds, compute various metrics
for the model. Returns a dictionary of the metrics, including balanced accuracy, average odds
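
The `thresh_arr: np.ndarray` annotation implies callers pass an array of candidate cutoffs to sweep. A hedged usage sketch; the threshold range and the `dataset_orig_panel19_val` split name are assumptions, not taken from this diff:

```python
import numpy as np

# Sweep 50 candidate classification thresholds (range is illustrative).
thresh_arr = np.linspace(0.01, 0.5, 50)

# Hypothetical call, reusing the fitted pipeline sketched above.
val_metrics = test(dataset=dataset_orig_panel19_val, model=model, thresh_arr=thresh_arr)
```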
@@ -440,7 +441,7 @@ Then, we'll create a helper function, `mini_test` to allow us to call the `descr
After that, we call the ThresholdOptimizer's predict function on the validation and test data, and then compute metrics and print the results.

```python
-def mini_test(dataset, preds:np.ndarray) -> dict:
+def mini_test(dataset:MEPSDataset19, preds:np.ndarray) -> dict:
'''
Given a dataset and predictions, compute various metrics for the model. Returns a dictionary of the metrics,
including balanced accuracy, average odds difference, disparate impact, statistical parity difference, equal
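
The `ThresholdOptimizer` the episode text refers to is presumably fairlearn's postprocessor. A hedged sketch of the fit/predict flow described above; every variable name (`X_val`, `y_val`, `sens_val`) and the constraint choice are illustrative assumptions:

```python
from fairlearn.postprocessing import ThresholdOptimizer

# Wrap the already-fitted pipeline; prefit=True skips refitting the estimator.
postprocessor = ThresholdOptimizer(
    estimator=model,
    constraints="equalized_odds",  # assumed fairness constraint
    prefit=True,
)
postprocessor.fit(X_val, y_val, sensitive_features=sens_val)           # learns per-group thresholds
val_preds = postprocessor.predict(X_val, sensitive_features=sens_val)  # hard 0/1 predictions

# Hypothetical follow-up, matching the helper defined in this hunk:
# metrics = mini_test(dataset=dataset_orig_panel19_val, preds=val_preds)
```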
