Shaivi Malik's third blog post (#671)
Co-authored-by: Carlos Maltzahn <[email protected]>
shaivimalik and carlosmalt authored Sep 29, 2024
1 parent cfead6d commit 1dc6440
Showing 4 changed files with 40 additions and 4 deletions.
@@ -22,9 +22,10 @@ image:
Hello everyone! I'm Shaivi Malik, a computer science and engineering student. I am thrilled to announce that I have been selected as a Summer of Reproducibility Fellow. I will be contributing to the [Data leakage in applied ML: reproducing examples of irreproducibility](/project/osre24/nyu/data-leakage/) project under the mentorship of {{% mention ffund %}} and {{% mention MSaeed %}}. You can find my proposal [here](https://drive.google.com/file/d/1WAsDif61O2fWgtkl75bQAnIcm2hryt8z/view?usp=sharing).

This summer, we will reproduce studies from medicine, radiology and genomics. Through these studies, we'll explore and demonstrate three types of data leakage:

1. Pre-processing on train and test sets together
2. Model uses features that are not legitimate
3. Feature selection on training and test sets
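
The first type can be illustrated with a minimal sketch (synthetic data and numpy only, not any of the studies' actual code): standardizing with statistics computed on the full dataset lets information from the test set leak into training.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=100)
X_train, X_test = X[:80], X[80:]

# Leaky: scaling statistics computed on the ENTIRE dataset (train + test)
mu_leak, sd_leak = X.mean(), X.std()
X_test_leaky = (X_test - mu_leak) / sd_leak

# Correct: scaling statistics computed on the training split only
mu, sd = X_train.mean(), X_train.std()
X_test_clean = (X_test - mu) / sd

# The two scaled test sets differ: the leaky scaler has already seen the test data
print(np.allclose(X_test_leaky, X_test_clean))  # → False
```

The same reasoning applies to any fitted preprocessing step (imputation, feature selection, oversampling): fit it on the training split only, then apply it to the test split.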

For each paper, we will replicate the published results with and without the data leakage error, and present performance metrics for comparison. We will also provide explanatory materials and example questions to test understanding. All these resources will be bundled together in a dedicated repository for each paper.

@@ -29,7 +29,7 @@ I have been working on reproducing the results from **Characterization of Term a

Reproducing the published results came with its own challenges, including updating EHG-Oversampling to extract meaningful features from EHG signals and finding optimal hyperparameters for the model. Through our work on reproducing the published results and creating toy example notebooks, we have demonstrated that data leakage leads to overly optimistic measures of model performance and that models trained with data leakage fail to generalize to real-world data. In such cases, performance on the test set doesn't translate to performance in the real world.
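
One way this kind of leakage arises can be sketched in a few lines (a toy illustration with made-up sample IDs, not the project's actual pipeline): oversampling the minority class before splitting lets copies of the same record land on both sides of the train/test boundary.

```python
import random

random.seed(0)
samples = list(range(20))                  # 20 unique minority-class sample IDs

# Leaky order: oversample FIRST, then split
oversampled = samples * 3                  # naive duplication (3 copies of each)
random.shuffle(oversampled)
train_leaky, test_leaky = oversampled[:44], oversampled[44:]
leaked_ids = set(train_leaky) & set(test_leaky)   # IDs seen in both splits

# Correct order: split first, then oversample only the training split
random.shuffle(samples)
train_ids, test_ids = samples[:15], samples[15:]
train_clean = train_ids * 3                # duplicates stay inside training
clean_overlap = set(train_clean) & set(test_ids)

print(len(leaked_ids), len(clean_overlap))  # shared IDs with the leak, none without
```

A model evaluated on `test_leaky` is partly being tested on records it memorized during training, which inflates every reported metric.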

Next, I'll be reproducing the results published in **Identification of COVID-19 Samples from Chest X-Ray Images Using Deep Learning: A Comparison of Transfer Learning Approaches**.

You can follow my work on the EHG paper [here](https://github.com/shaivimalik/medicine_preprocessing-on-entire-dataset).

@@ -0,0 +1,35 @@
---
title: "Data Leakage in Applied ML: model uses features that are not legitimate"
subtitle: ""
summary:
authors: [shaivimalik]
tags: ["osre24","reproducibility"]
categories: [SoR]
date: 2024-09-24
lastmod: 2024-09-24
featured: false
draft: false

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: ""
  focal_point: ""
  preview_only: false
---


Hello everyone!

I have been working on reproducing the results from **Identification of COVID-19 Samples from Chest X-Ray Images Using Deep Learning: A Comparison of Transfer Learning Approaches**. This study aimed to distinguish COVID-19 cases from normal and pneumonia cases using chest X-ray images. Since my last blog post, we have successfully reproduced the results using the VGG19 model, achieving a 92% accuracy on the test set. However, a significant demographic inconsistency exists: normal and pneumonia chest X-ray images were from pediatric patients, while COVID-19 chest X-ray images were from adults. This allowed the model to achieve high accuracy by learning features that were not clinically relevant.

In [Reproducing “Identification of COVID-19 samples from chest X-Ray images using deep learning: A comparison of transfer learning approaches” without Data Leakage](https://github.com/shaivimalik/covid_illegitimate_features/blob/main/notebooks/Correcting_Original_Result.ipynb), we followed the methodology outlined in the paper with one key change: we used datasets containing only adult chest X-ray images. This time, the model achieved an accuracy of 51%, a drop of 41 percentage points from the earlier result, confirming that the metrics reported in the paper were overly optimistic due to data leakage: the model had learned illegitimate features.
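
The shortcut can be mimicked with synthetic numbers (a hedged sketch; the ages, thresholds, and sizes below are invented, not the paper's data): a "model" that keys on patient age alone looks excellent on the confounded dataset and collapses to chance once the age difference is removed.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
label = rng.integers(0, 2, n)                  # 0 = non-COVID, 1 = COVID

# Confounded data: non-COVID cases are pediatric, COVID cases are adult
age = np.where(label == 1, rng.normal(55, 10, n), rng.normal(8, 3, n))

# A trivial threshold on the illegitimate feature scores near-perfectly
pred = (age > 30).astype(int)
confounded_acc = (pred == label).mean()

# On deconfounded data (all adults) the same rule predicts COVID for
# almost everyone and falls to roughly chance level
age_adult = rng.normal(55, 10, n)
pred_adult = (age_adult > 30).astype(int)
deconf_acc = (pred_adult == label).mean()
print(confounded_acc, deconf_acc)
```

The 92%-to-51% drop we observed follows the same pattern: the original accuracy measured the age confound, not COVID-19 pathology.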

![GradCAM from husky vs wolf example](gradcam.png)

To further illustrate this issue, we created a [toy example](https://github.com/shaivimalik/covid_illegitimate_features/blob/main/notebooks/Exploring_ConvNet_Activations.ipynb) demonstrating how a model can learn illegitimate features. Using a small dataset of wolf and husky images, the model achieved an accuracy of 90%. We then revealed that this performance was due to a data leakage issue: all wolf images had snowy backgrounds, while husky images had grassy backgrounds. When we trained the model on a dataset where both wolf and husky images had white backgrounds, the accuracy dropped to 70%. This shows that the accuracy obtained earlier was an overly optimistic measure due to data leakage.
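
The husky/wolf effect can be reproduced in miniature (purely synthetic "images" and a one-feature classifier; an assumption-laden sketch, not the notebook's ConvNet): a rule that reads only mean background brightness aces the confounded set and drops to chance on matched backgrounds.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_images(bg_mean, n=50):
    # 8x8 grayscale "images" whose pixels are dominated by the background
    return rng.normal(bg_mean, 0.05, size=(n, 8, 8))

wolves  = make_images(0.9)   # snowy (bright) backgrounds
huskies = make_images(0.3)   # grassy (dark) backgrounds

# One-feature classifier: per-image mean brightness, thresholded halfway
X = np.concatenate([wolves, huskies]).mean(axis=(1, 2))
y = np.array([1] * 50 + [0] * 50)
acc_shortcut = ((X > 0.6).astype(int) == y).mean()

# Same animals on identical white backgrounds: the shortcut vanishes
wolves_w, huskies_w = make_images(0.9), make_images(0.9)
Xw = np.concatenate([wolves_w, huskies_w]).mean(axis=(1, 2))
acc_clean = ((Xw > 0.6).astype(int) == y).mean()
print(acc_shortcut, acc_clean)
```

GradCAM makes the same point visually: the saliency concentrates on the background pixels, not the animal.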

You can explore our work on the COVID-19 paper [here](https://github.com/shaivimalik/covid_illegitimate_features).

Lastly, I would like to thank {{% mention ffund %}} and {{% mention MSaeed %}} for their support and guidance throughout my SoR journey.
