
Commit

Merge pull request #291 from AIM-Harvard/v1
Update docs
surajpaib authored Mar 24, 2024
2 parents 60d55db + 069e4cf commit ef038af
Showing 17 changed files with 308 additions and 87 deletions.
19 changes: 13 additions & 6 deletions README.md
@@ -1,15 +1,22 @@

<center>
This is the official repository for the paper: "<b>Foundation Model for Cancer Imaging Biomarkers</b>"
This is the official repository for the paper

<br/>

</center>
<div style="display: flex; justify-content: center"><img src="docs/assets/Header.png"/>
<div style="display: flex; flex-direction: column; align-items: center;">
<img src="./docs/assets/Header.png" style="width: 100%;"/>
<div style="display: flex; justify-content: space-between; width: 100%;">
<img src="./docs/assets/Mhub_image.png" style="width: 50%;"/>
<img src="./docs/assets/Mhub_image2.png" style="width: 50%;"/>
</div>
<a href="https://www.nature.com/articles/s42256-024-00807-9"><img src="./docs/assets/readpaper_logo.png" style="width: 100%;"></a>
</div>

<br/><br/>
<div align="center">

<i><font size="-1">Suraj Pai, Dennis Bontempi, Ibrahim Hadzic, Vasco Prudente, Mateo Sokač, Tafadzwa L. Chaunzwa, Simon Bernatz, Ahmed Hosny, Raymond H Mak, Nicolai J Birkbak, Hugo JWL Aerts</font></i>


[![Build Status](https://github.com/AIM-Harvard/foundation-cancer-image-biomarker/actions/workflows/build.yml/badge.svg)](https://github.com/AIM-Harvard/foundation-cancer-image-biomarker/actions/workflows/build.yml)
[![Python Version](https://img.shields.io/pypi/pyversions/foundation-cancer-image-biomarker.svg)](https://pypi.org/project/foundation-cancer-image-biomarker/)
[![Dependencies Status](https://img.shields.io/badge/dependencies-up%20to%20date-brightgreen.svg)](https://github.com/AIM-Harvard/foundation-cancer-image-biomarker/pulls?utf8=%E2%9C%93&q=is%3Apr%20author%3Aapp%2Fdependabot)
@@ -23,7 +30,7 @@ This is the official repository for the paper: "<b>Foundation Model for Canc
</div>

---
**NOTE: **
**NOTE:**
For detailed documentation, check our [website](https://aim-harvard.github.io/foundation-cancer-image-biomarker/).

---
Binary file added docs/assets/Mhub_image.png
Binary file added docs/assets/Mhub_image2.png
Binary file added docs/assets/readpaper_logo.png
19 changes: 9 additions & 10 deletions docs/index.md
@@ -3,14 +3,13 @@ hide:
- title
---
#

<center>
This is the official documentation for the paper: "<b>Foundation Model for Cancer Imaging Biomarkers</b>"
</center>
<div style="display: flex;"><img src="assets/Header.png"/></div>


<i><font size="-1">Suraj Pai, Dennis Bontempi, Ibrahim Hadzic, Vasco Prudente, Mateo Sokač, Tafadzwa L. Chaunzwa, Simon Bernatz, Ahmed Hosny, Raymond H Mak, Nicolai J Birkbak, Hugo JWL Aerts</font></i>
<div style="display: flex; flex-direction: column; align-items: center;">
<img src="assets/Header.png" style="width: 100%;"/>
<div style="display: flex; justify-content: space-between; width: 100%;">
<img src="assets/Mhub_image.png" style="width: 50%;"/>
<img src="assets/Mhub_image2.png" style="width: 50%;"/>
</div>
</div>


## Documentation Walkthrough
@@ -20,11 +19,11 @@ This is the official documentation for the paper: "<b>Foundation Model for C
!!! note
    [We also provide quickstart examples that run in a free cloud-based environment](./getting-started/cloud-quick-start.md) (through Google Colab) so you can get familiar with our workflows without having to download anything to your local machine!

[Replication Guide](./user-guide/data.md)<br> If you would like to pre-train a foundation model on your own unannotated data or would like to replicate the training and evaluation from our study, see here.
[Replication Guide](./replication-guide/data.md)<br> If you would like to pre-train a foundation model on your own unannotated data or would like to replicate the training and evaluation from our study, see here.

[Tutorials](https://github.com/AIM-Harvard/foundation-cancer-image-biomarker/tree/master/tutorials)<br> We provide comprehensive tutorials that use the foundation model for cancer imaging biomarkers and compare it against other popular methods. If you would like to build your own study using our foundation model, these tutorials are highly recommended as a starting point.

[API Docs](./api_docs/fmcib/index.html) <br> This is for the more advanced user who would like to deep-dive into different methods and classes provided by our package.
[API Docs](./reference/run) <br> This is for the more advanced user who would like to deep-dive into different methods and classes provided by our package.


## License
File renamed without changes.
2 changes: 2 additions & 0 deletions docs/replication-guide/baselines.md
@@ -0,0 +1,2 @@
# Reproduce Baselines
:hourglass_flowing_sand: Coming soon! :hourglass_flowing_sand:
2 changes: 1 addition & 1 deletion docs/user-guide/data.md → docs/replication-guide/data.md
@@ -62,7 +62,7 @@ bash luna16.sh <path_to_download>
The easiest way to download the LUNG1 and RADIO datasets is through s5cmd and [IDC manifests](https://learn.canceridc.dev/data/downloading-data)
For convenience, the manifests for each have already been provided in `data/download`: `nsclc_radiomics.csv` for LUNG1 and `nsclc_radiogenomics.`

First, you'll need to install `s5cmd`. Follow the instructions here: https://github.com/peak/s5cmd?tab=readme-ov-file#installation
First, you'll need to install `s5cmd`. Follow the instructions [here](https://github.com/peak/s5cmd?tab=readme-ov-file#installation)

Once you have s5cmd installed, run
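The exact command is truncated in this diff view. As a rough, hypothetical sketch — the manifest column layout, destination path, and bucket access details are assumptions, not taken from the repo — a manifest-driven download could be scripted like this:

```python
import csv
import subprocess
from pathlib import Path

# Hypothetical sketch only: the real manifest layout and command are not shown in this diff.
manifest = Path("data/download/nsclc_radiomics.csv")  # LUNG1 manifest shipped with the repo
dest = Path("data/lung1")                              # assumed local output directory
dest.mkdir(parents=True, exist_ok=True)

with manifest.open() as f:
    for row in csv.reader(f):
        url = row[0].strip()              # assumes the object URL sits in the first column
        if not url.startswith("s3://"):
            continue                      # skip header or malformed rows
        # --no-sign-request lets s5cmd read public buckets anonymously; depending on where
        # the IDC data is hosted you may also need an --endpoint-url flag.
        subprocess.run(["s5cmd", "--no-sign-request", "cp", url, f"{dest}/"], check=True)
```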

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
34 changes: 0 additions & 34 deletions docs/user-guide/reproduce_baselines.md

This file was deleted.

65 changes: 37 additions & 28 deletions fmcib/visualization/verify_io.py
@@ -17,45 +17,54 @@ def visualize_seed_point(row):
None
"""
# Define the transformation pipeline
is_label_provided = "label_path" in row
keys = ["image_path", "label_path"] if is_label_provided else ["image_path"]
all_keys = keys if is_label_provided else ["image_path", "coordX", "coordY", "coordZ"]
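    # `keys` selects the image (and, if given, label) volumes for the spatial transforms below;
    # `all_keys` additionally keeps the raw seed coordinates when no label file is provided.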

T = monai_transforms.Compose(
[
monai_transforms.LoadImaged(keys=["image_path"], image_only=True, reader="ITKReader"),
monai_transforms.EnsureChannelFirstd(keys=["image_path"]),
monai_transforms.Spacingd(keys=["image_path"], pixdim=1, mode="bilinear", align_corners=True, diagonal=True),
monai_transforms.LoadImaged(keys=keys, image_only=True, reader="ITKReader"),
monai_transforms.EnsureChannelFirstd(keys=keys),
monai_transforms.Spacingd(keys=keys, pixdim=1, mode="bilinear", align_corners=True, diagonal=True),
monai_transforms.ScaleIntensityRanged(keys=["image_path"], a_min=-1024, a_max=3072, b_min=0, b_max=1, clip=True),
monai_transforms.Orientationd(keys=["image_path"], axcodes="LPS"),
monai_transforms.SelectItemsd(keys=["image_path", "coordX", "coordY", "coordZ"]),
monai_transforms.Orientationd(keys=keys, axcodes="LPS"),
monai_transforms.SelectItemsd(keys=all_keys),
]
)

# Apply the transformation pipeline
out = T(row)

# Calculate the center of the image
center = (-out["coordX"], -out["coordY"], out["coordZ"])
center = np.linalg.inv(np.array(out["image_path"].affine)) @ np.array(center + (1,))
center = [int(x) for x in center[:3]]

# Define the image and label
image = out["image_path"]
label = torch.zeros_like(image)

# Define the dimensions of the image and the patch
C, H, W, D = image.shape
Ph, Pw, Pd = 50, 50, 50

# Calculate and clamp the ranges for cropping
min_h, max_h = max(center[0] - Ph // 2, 0), min(center[0] + Ph // 2, H)
min_w, max_w = max(center[1] - Pw // 2, 0), min(center[1] + Pw // 2, W)
min_d, max_d = max(center[2] - Pd // 2, 0), min(center[2] + Pd // 2, D)

# Check if coordinates are valid
assert min_h < max_h, "Invalid coordinates: min_h >= max_h"
assert min_w < max_w, "Invalid coordinates: min_w >= max_w"
assert min_d < max_d, "Invalid coordinates: min_d >= max_d"

# Define the label for the cropped region
label[:, min_h:max_h, min_w:max_w, min_d:max_d] = 1
if not is_label_provided:
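        # Map the stored seed point into voxel indices: flip X/Y to match the LPS orientation
        # applied above, then apply the inverse affine of the loaded image.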
center = (-out["coordX"], -out["coordY"], out["coordZ"])
center = np.linalg.inv(np.array(out["image_path"].affine)) @ np.array(center + (1,))
center = [int(x) for x in center[:3]]

# Define the image and label
label = torch.zeros_like(image)

# Define the dimensions of the image and the patch
C, H, W, D = image.shape
Ph, Pw, Pd = 50, 50, 50

# Calculate and clamp the ranges for cropping
min_h, max_h = max(center[0] - Ph // 2, 0), min(center[0] + Ph // 2, H)
min_w, max_w = max(center[1] - Pw // 2, 0), min(center[1] + Pw // 2, W)
min_d, max_d = max(center[2] - Pd // 2, 0), min(center[2] + Pd // 2, D)

# Check if coordinates are valid
assert min_h < max_h, "Invalid coordinates: min_h >= max_h"
assert min_w < max_w, "Invalid coordinates: min_w >= max_w"
assert min_d < max_d, "Invalid coordinates: min_d >= max_d"

# Define the label for the cropped region
label[:, min_h:max_h, min_w:max_w, min_d:max_d] = 1
else:
label = out["label_path"]
center = torch.nonzero(label).float().mean(dim=0)
center = [int(x) for x in center][1:]

# Blend the image and the label
ret = blend_images(image=image, label=label, alpha=0.3, cmap="hsv", rescale_arrays=False)
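For orientation, here is a minimal sketch of how this function might be called; the import path, file names, and coordinate values are illustrative assumptions, not taken from this diff:

```python
from fmcib.visualization.verify_io import visualize_seed_point  # assumed import path

# Seed-point mode: a 50x50x50 voxel box is drawn around the given world coordinate.
row = {
    "image_path": "data/lung1/scan_0001.nrrd",  # CT volume readable by the ITK reader
    "coordX": -45.2,                            # illustrative seed point in world coordinates
    "coordY": 12.7,
    "coordZ": -210.0,
}
visualize_seed_point(row)

# Mask mode: if `label_path` is present, the provided segmentation is overlaid instead.
row_with_mask = {
    "image_path": "data/lung1/scan_0001.nrrd",
    "label_path": "data/lung1/mask_0001.nrrd",
}
visualize_seed_point(row_with_mask)
```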
16 changes: 9 additions & 7 deletions mkdocs.yml
@@ -39,7 +39,8 @@ plugins:
docstring_style: google
options:
# Removed the default filter that excludes private members (that is, members whose names start with a single underscore).
filters: null
filters: null
show_source: true

nav:
- 'index.md'
@@ -48,12 +49,13 @@ nav:
- 'Cloud Quick Start': 'getting-started/cloud-quick-start.md'
- 'Quick Start': 'getting-started/quick-start.md'
- 'Replication Guide':
- 'Data Download and Preprocessing': 'user-guide/data.md'
- 'Pre-training the FM': 'user-guide/reproduce_fm.md'
- 'Adapt the FM to downstream tasks': 'user-guide/fm_adaptation.md'
- 'Extracting Features & Predictions': 'user-guide/inference.md'
- 'Reproduce Analysis': 'user-guide/analysis.md'
# - 'Training baselines': 'user-guide/reproduce_baselines.md'
- 'Data Download and Preprocessing': 'replication-guide/data.md'
- 'Pre-training the FM': 'replication-guide/reproduce_fm.md'
- 'Adapt the FM to downstream tasks': 'replication-guide/fm_adaptation.md'
- 'Baselines for downstream tasks': 'replication-guide/baselines.md'
- 'Extracting Features & Predictions': 'replication-guide/inference.md'
- 'Reproduce Analysis': 'replication-guide/analysis.md'
# - 'Training baselines': 'replication-guide/reproduce_baselines.md'
- 'Tutorials': https://github.com/AIM-Harvard/foundation-cancer-image-biomarker/tree/master/tutorials
- 'API Reference': 'reference/'

5 changes: 4 additions & 1 deletion scripts/generate_api_reference_pages.py
@@ -18,7 +18,10 @@
root = Path(__file__).parent.parent
src = root / PACKAGE

for path in sorted(src.rglob("*.py")):
# Sort files by depth
paths = sorted(src.rglob("*.py"), key=lambda path: len(path.parts))

for path in paths:
print(f"Processing {path}")
module_path = path.relative_to(src).with_suffix("")

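The new `key=lambda path: len(path.parts)` orders shallower modules before deeper ones, presumably so that top-level pages are generated before their children. A standalone illustration with made-up paths:

```python
from pathlib import Path

# Made-up paths, just to show the effect of sorting by path depth.
paths = [
    Path("fmcib/models/resnet/blocks.py"),
    Path("fmcib/run.py"),
    Path("fmcib/models/__init__.py"),
]

for p in sorted(paths, key=lambda path: len(path.parts)):
    print(p)
# fmcib/run.py
# fmcib/models/__init__.py
# fmcib/models/resnet/blocks.py
```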
233 changes: 233 additions & 0 deletions tutorials/get_seed_from_mask.ipynb

Large diffs are not rendered by default.
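The notebook itself is not rendered in this diff, but the idea its name suggests — deriving a seed coordinate from a segmentation mask — mirrors the `else` branch of `visualize_seed_point` above: take the centroid of the nonzero voxels and map it back to world coordinates. A rough, self-contained sketch (the file name and the use of SimpleITK are assumptions, not the notebook's actual code):

```python
import numpy as np
import SimpleITK as sitk

# Rough sketch with an assumed file name; not the notebook's actual code.
mask = sitk.ReadImage("data/lung1/mask_0001.nrrd")
arr = sitk.GetArrayFromImage(mask)              # numpy array in (z, y, x) order

# Centroid of the nonzero voxels, in index space
idx_zyx = np.argwhere(arr > 0).mean(axis=0)

# Convert the (x, y, z) continuous index to a physical (world) coordinate
idx_xyz = tuple(float(v) for v in idx_zyx[::-1])
coord = mask.TransformContinuousIndexToPhysicalPoint(idx_xyz)
print("seed point (world coordinates):", coord)
```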
