From 2086db932ed603b81352bc6785e860f72ff57254 Mon Sep 17 00:00:00 2001
From: valentina
Date: Sat, 6 Jul 2024 20:55:08 -0700
Subject: [PATCH] exercise

---
 docs/model_benchmarking.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/docs/model_benchmarking.md b/docs/model_benchmarking.md
index 167cfbc..b96eaea 100644
--- a/docs/model_benchmarking.md
+++ b/docs/model_benchmarking.md
@@ -20,6 +20,18 @@ The workflow which triggers the model evaluation is in [`model_benchmarking.yml`
 
 The next workflow follows the steps of the `create_website_spectrogram` workflow, which converts a notebook, [`display_benchmarks`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/ambient_sound_analysis/display_benchmarks.ipynb), to a website. In this case, we have a very simple notebook which reads all `score_[SHA].csv` files and displays a "benchmark table" with the individual entries. This notebook is converted to a webpage ([https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html](https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html)).
 
+### Exercise
+
+Create a branch and update the `model_versioning.py` file with a different threshold:
+
+```
+# set threshold
+threshold = ??
+```
+
+Submit a pull request from this branch to main and monitor the execution of the workflows. Check out the generated website at [https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html](https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/display_benchmarks.html).
+
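For context on what the patched docs describe: the `display_benchmarks` notebook reads all per-commit `score_[SHA].csv` files and stacks them into one benchmark table. A minimal sketch of that logic in pandas is below; the file layout, column name `score`, and one-row-per-file shape are assumptions for illustration, not the repository's actual schema.

```python
# Illustrative sketch of the display_benchmarks notebook logic: gather all
# score_[SHA].csv files and stack them into one "benchmark table".
# File and column names are assumptions, not the repo's actual schema.
import glob
import pandas as pd

# demo inputs standing in for files produced by the benchmarking workflow
pd.DataFrame({"score": [0.91]}).to_csv("score_abc123.csv", index=False)
pd.DataFrame({"score": [0.87]}).to_csv("score_def456.csv", index=False)

rows = []
for path in sorted(glob.glob("score_*.csv")):
    df = pd.read_csv(path)
    # recover the commit SHA from the file name so each row is attributable
    df["sha"] = path.removeprefix("score_").removesuffix(".csv")
    rows.append(df)

benchmarks = pd.concat(rows, ignore_index=True)
print(benchmarks)
```

Because each pull request appends a new `score_[SHA].csv`, rerunning this notebook after the exercise's PR should show an extra row in the published table.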