From 559c99783611ba1339189d755ed5df574948cc16 Mon Sep 17 00:00:00 2001
From: valentina
Date: Sun, 23 Jun 2024 17:42:46 -0700
Subject: [PATCH 01/12] adding config files list

---
 lesson.md | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/lesson.md b/lesson.md
index e61b915..8286c4d 100644
--- a/lesson.md
+++ b/lesson.md
@@ -4,7 +4,20 @@
 2. [GitHub Actions Python Environment](#github-actions-python-environment-workflow)
 3. [Orcasound Spectrogram Visualization Workflow](#orcasound-spectrogram-visualization-workflow)
 4. [Exporting Results](#exporting-results)
-5. [Scaling Workflows](#scaling-workflows)
+5. [Visualizing Results on a Webpage](#visualizing-results-on-a-webpage)
+6. [Scaling Workflows](#scaling-workflows)
+
+All workflow configurations are stored in the [`.github/workflows`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/tree/main/.github/workflows) directory, and we will go through them in the following order:
+
+1. [`python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml)
+2. [`conda_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/conda_env.yml)
+3. [`noise_processing.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/noise_processing.yml)
+4. [`create_website_spectrogram.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website_spectrogram.yml)
+5. [`create_website.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website.yml)
+6. ...
+
+
+
 # Setup
 * Fork this repo
@@ -67,7 +80,7 @@ We will discuss several different ways to export results.

 One of the easiest ways to display results is to store them in the GitHub repository. This can be a quick solution, for example, to display a small plot or a table within the `Readme.md` of the repository and update it as the workflow is rerun. This is not a practical solution for big outputs as the GitHub repositories are recommended to not exceed more than 1GB, and all versions of the files will be preserved in the repository's history (thus slowing down cloning).

-It is possible to execute all steps to add, commit, and push a file to GitHub, but there is already an [GitHub Auto Commit Action]https://github.com/marketplace/actions/git-auto-commit) to achieve that.
+It is possible to execute all steps to add, commit, and push a file to GitHub, but there is already a [GitHub Auto Commit Action](https://github.com/marketplace/actions/git-auto-commit) to achieve that.
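+A minimal sketch of such a step (not necessarily the exact configuration shown in the screenshot below; it assumes the plots were written to `ambient_sound_analysis/img/` earlier in the job, after a checkout step, and that the job has `contents: write` permission):
+
+```yaml
+# Commit any refreshed plots back to the repository.
+- uses: stefanzweifel/git-auto-commit-action@v5
+  with:
+    commit_message: "Update ambient noise plots"
+    file_pattern: "ambient_sound_analysis/img/*.png"
+```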
![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/auto-commit-action.png) From f8497a3bb1f06c7f843dae0ba041e90c6440d890 Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 18:12:16 -0700 Subject: [PATCH 02/12] renaming to docs --- docs/_config.yml | 32 +++++++ docs/_toc.yml | 7 ++ docs/intro.md | 11 +++ docs/lesson.md | 217 ++++++++++++++++++++++++++++++++++++++++++ docs/logo.png | Bin 0 -> 9854 bytes docs/markdown.md | 55 +++++++++++ docs/references.bib | 56 +++++++++++ docs/requirements.txt | 3 + 8 files changed, 381 insertions(+) create mode 100644 docs/_config.yml create mode 100644 docs/_toc.yml create mode 100644 docs/intro.md create mode 100644 docs/lesson.md create mode 100644 docs/logo.png create mode 100644 docs/markdown.md create mode 100644 docs/references.bib create mode 100644 docs/requirements.txt diff --git a/docs/_config.yml b/docs/_config.yml new file mode 100644 index 0000000..5f534f8 --- /dev/null +++ b/docs/_config.yml @@ -0,0 +1,32 @@ +# Book settings +# Learn more at https://jupyterbook.org/customize/config.html + +title: My sample book +author: The Jupyter Book Community +logo: logo.png + +# Force re-execution of notebooks on each build. +# See https://jupyterbook.org/content/execute.html +execute: + execute_notebooks: force + +# Define the name of the latex output file for PDF builds +latex: + latex_documents: + targetname: book.tex + +# Add a bibtex file so that we can create citations +bibtex_bibfiles: + - references.bib + +# Information about where the book exists on the web +repository: + url: https://github.com/executablebooks/jupyter-book # Online location of your book + path_to_book: docs # Optional path to your book, relative to the repository root + branch: master # Which branch of the repository should be used when creating links (optional) + +# Add GitHub buttons to your book +# See https://jupyterbook.org/customize/config.html#add-a-link-to-your-repository +html: + use_issues_button: true + use_repository_button: true diff --git a/docs/_toc.yml b/docs/_toc.yml new file mode 100644 index 0000000..199ab1a --- /dev/null +++ b/docs/_toc.yml @@ -0,0 +1,7 @@ +# Table of contents +# Learn more at https://jupyterbook.org/customize/toc.html + +format: jb-book +root: intro +chapters: +- file: lesson diff --git a/docs/intro.md b/docs/intro.md new file mode 100644 index 0000000..f8cdc73 --- /dev/null +++ b/docs/intro.md @@ -0,0 +1,11 @@ +# Welcome to your Jupyter Book + +This is a small sample book to give you a feel for how book content is +structured. +It shows off a few of the major file types, as well as some sample content. +It does not go in-depth into any particular topic - check out [the Jupyter Book documentation](https://jupyterbook.org) for more information. + +Check out the content pages bundled with this sample book to see more. + +```{tableofcontents} +``` diff --git a/docs/lesson.md b/docs/lesson.md new file mode 100644 index 0000000..8286c4d --- /dev/null +++ b/docs/lesson.md @@ -0,0 +1,217 @@ +# Overview + +1. [Setup](#setup) +2. [GitHub Actions Python Environment](#github-actions-python-environment-workflow) +3. [Orcasound Spectrogram Visualization Workflow](#orcasound-spectrogram-visualization-workflow) +4. [Exporting Results](#exporting-results) +5. [Visualizing Results on a Webpage](#visualizing-results-on-a-webpage) +6. 
[Scaling Workflows](#scaling-workflows)
+
+All workflow configurations are stored in the [`.github/workflows`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/tree/main/.github/workflows) directory, and we will go through them in the following order:
+
+1. [`python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml)
+2. [`conda_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/conda_env.yml)
+3. [`noise_processing.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/noise_processing.yml)
+4. [`create_website_spectrogram.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website_spectrogram.yml)
+5. [`create_website.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website.yml)
+6. ...
+
+
+
+# Setup
+* Fork this repo
+* Enable GitHub Actions:
+  * Settings -> Actions -> Allow actions and reusable workflows
+  * [Managing Permissions Documentation](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#managing-github-actions-permissions-for-your-repository)
+
+# GitHub Actions Python Environment Workflow
+
+## Installing Packages with `pip`
+First, we will run a basic workflow which creates a Python environment with a few scientific packages and prints out their versions.
+
+Python Environment Workflow Configuration:
+[`.github/workflows/python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml)
+
+
+* Go to the **Actions** tab
+* Click on **Python Environment**
+* Click **Run workflow**: this will manually trigger the workflow ([`workflow_dispatch`](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow))
+* Click on the newly created run to see the execution progress
+
+
+### Exercise:
+Edit [`.github/workflows/python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml) to install packages popular in your research. Trigger the workflow to monitor their installation.
+
+
+## Installing Packages with Conda
+We can also install packages through conda (instead of `pip`). We will use the `setup-miniconda` action to achieve that easily.
+
+
+Conda Environment Workflow Configuration: [`.github/workflows/conda_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/conda_env.yml)
+
+# Orcasound Spectrogram Visualization Workflow
+
+Next, we will demonstrate how GitHub Actions can be used to display a spectrogram for a segment from an underwater audio stream.
+
+Spectrogram Visualization Workflow: [`.github/workflows/noise_processing.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/noise_processing.yml)
+
+Workflow Steps:
+
+* Generate a spectrogram for a period of time (with the `ambient_sound_analysis` package)
+  * Download data from an AWS S3 bucket (in `.ts` format) for a given time period
+  * Convert many small `.ts` files to one file in `.wav` format
+  * Generate the power spectrogram and store it in `.parquet` format
+* Read the power spectrogram into a `pandas` dataframe
+* Create plots and save them: `psd.png` and `broadband.png`.
+* Upload the `.png` files to GitHub
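+The linked `noise_processing.yml` is the authoritative definition of these steps. As a rough sketch of the overall shape of such a workflow (the schedule, requirements file, and entry-point script below are hypothetical placeholders, not the repository's actual configuration):
+
+```yaml
+on:
+  workflow_dispatch:        # allow manual runs from the Actions tab
+  schedule:
+    - cron: "0 6 * * *"     # e.g. regenerate the plots once a day (placeholder)
+
+jobs:
+  process:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: "3.11"
+      - run: pip install -r requirements.txt     # hypothetical dependency file
+      - run: python generate_noise_plots.py      # hypothetical entry point that writes the .png files
+```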
+
+After the workflow is executed, the `psd.png` and `broadband.png` files are updated in the repo and are visualized below.
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/ambient_sound_analysis/img/psd.png)
+
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/ambient_sound_analysis/img/broadband.png)
+
+
+# Exporting Results
+
+We will discuss several different ways to export results.
+
+## Uploading to the GitHub Repository
+
+One of the easiest ways to display results is to store them in the GitHub repository. This can be a quick solution, for example, to display a small plot or a table within the `Readme.md` of the repository and update it as the workflow is rerun. This is not a practical solution for big outputs, as GitHub repositories are recommended not to exceed 1GB, and all versions of the files will be preserved in the repository's history (thus slowing down cloning).
+
+It is possible to execute all steps to add, commit, and push a file to GitHub, but there is already a [GitHub Auto Commit Action](https://github.com/marketplace/actions/git-auto-commit) to achieve that.
+
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/auto-commit-action.png)
+
+
+## Uploading as a GitHub Workflow Artifact
+
+GitHub provides an option for temporary storage of GitHub Actions data as Workflow Artifacts. These are kept on the GitHub website as zipped files and can be downloaded within 90 days for public repositories, or 400 days for private repositories.
+
+There is a GitHub Action which can upload one or more files as GitHub Artifacts.
+
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/artifact-upload-action.png)
+
+The artifact can be found by clicking on the workflow run and scrolling down to the Artifacts section.
+
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/artifact_github_interface.png)
+
+
+The artifact can be downloaded directly from the interface, but it can also be downloaded through the GitHub CLI (`gh`):
+
+```
+gh run download
+```
+
+The workflow run also provides a publicly available link to download the artifact:
+
+Artifact download URL: [https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017)
+
+There is a `download-artifact` action to download artifacts and share them between jobs within a workflow run (note this is limited to the individual workflow run; for downloading across runs, use the other options).
+
+[Here](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) is more detailed documentation on GitHub Artifacts.
+
+
+
+
+## Uploading to Personal Storage
+
+A more long-term solution is to store outputs in personal storage. This could be, for example, Google Drive or a cloud provider object storage such as an AWS S3 bucket. To have write access to these storage systems, one will need to provide the credential information securely to GitHub Actions. This can be achieved by storing the credential information as Action Secrets.
+
+The write operation can be performed directly from the Python code or from the GitHub Action configuration.
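+As a minimal sketch of the configuration-side approach (the bucket name, region, and secret names below are placeholders, not values used in this repository), a secret can be exposed to a single step as an environment variable:
+
+```yaml
+# Copy a generated plot to an S3 bucket using credentials stored as repository secrets.
+- name: Upload plot to S3
+  env:
+    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
+    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+    AWS_DEFAULT_REGION: us-west-2
+  run: aws s3 cp ambient_sound_analysis/img/psd.png s3://my-results-bucket/psd.png
+```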
+Here we will demonstrate how to upload data to Google Drive with `rclone`, a largely provider-agnostic tool for transferring data between storage systems.
+
+The approach consists of a few steps:
+
+1. use an `rclone` GitHub Action to avoid installing `rclone` manually
+   * we will use [AnimMouse/setup-rclone](https://github.com/marketplace/actions/setup-rclone-action)
+2. configure a Google Drive remote locally
+3. encode the text in the config file and save it as a secret `RCLONE_CONFIG`
+   * macOS: `openssl base64 -in ~/.config/rclone/rclone_drive.conf`
+4. run the `rclone` command to upload the plots to Google Drive
+   * `rclone copy ambient_sound_analysis/img/broadband.png mydrive:rclone_uploads/`
+
+
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/rclone_upload.png)
+
+[Secrets Documentation](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions)
+
+
+
+
+# Visualizing Results on a Webpage
+
+We saw that it is pretty easy to continuously update results in the `Readme.md` of the repository. However, sometimes we would like to display them on a website.
+
+
+
+We will demonstrate the scenario of converting a Jupyter Notebook to a webpage.
+
+Notebook: [`plot_noise_levels.ipynb`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/ambient_sound_analysis/plot_noise_levels.ipynb)
+
+Create Website with Spectrogram Workflow: [`.github/workflows/create_website_spectrogram.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website_spectrogram.yml)
+
+
+The process consists of the following stages:
+
+* build the website:
+  * use `nbconvert` to convert the notebook to an HTML webpage
+    * `jupyter nbconvert plot_noise_levels.ipynb --execute --to html --output-dir=_build/html --no-input`
+  * upload the built content as an artifact using the `upload-pages-artifact` action
+
+* deploy the website (if built successfully):
+  * configure the website with `actions/configure-pages`
+  * deploy the website with `actions/deploy-pages`
+
+The website can be found here:
+
+[https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/plot\_noise\_levels.html](https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/plot_noise_levels.html)
+
+
+The procedure is set up to run on `push`, so every time the notebook is updated, the website is updated as well.
+
+The plots in the notebook use `plotly` and have interactive features. Those are preserved on the website, providing the ability to engage with the data without having to run a notebook.
+
+The notebook does not display any code, which is convenient for showing results to the public. This was achieved by providing the `--no-input` argument to `nbconvert`. We also set the `%capture` magic in the notebook to capture some subprocess output. One can configure this further using cell tags to display content selectively.
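+Putting the build and deploy stages together, the overall pattern looks roughly like the sketch below. This is not the repository's actual `create_website_spectrogram.yml`; the action versions, dependency list, and working directory are indicative assumptions.
+
+```yaml
+on: push
+
+permissions:
+  contents: read
+  pages: write
+  id-token: write
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: "3.11"
+      # The notebook's real dependencies come from the repository; these are placeholders.
+      - run: pip install jupyter nbconvert plotly pandas
+      - name: Convert the notebook to HTML without code cells
+        working-directory: ambient_sound_analysis
+        run: jupyter nbconvert plot_noise_levels.ipynb --execute --to html --output-dir=_build/html --no-input
+      - uses: actions/upload-pages-artifact@v3
+        with:
+          path: ambient_sound_analysis/_build/html
+
+  deploy:
+    needs: build
+    runs-on: ubuntu-latest
+    environment:
+      name: github-pages
+      url: ${{ steps.deployment.outputs.page_url }}
+    steps:
+      - uses: actions/configure-pages@v4
+      - id: deployment
+        uses: actions/deploy-pages@v4
+```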
+ + + + + +Other ways: + +* [Jupyterbook](https://jupyterbook.org/en/stable/publish/gh-pages.html) +* [Readthedocs](https://about.readthedocs.com/?ref=readthedocs.com) +* Jekyll template +* Dashboard + + + +# Scaling Workflows + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/logo.png b/docs/logo.png new file mode 100644 index 0000000000000000000000000000000000000000..06d56f40c838b64eb048a63e036125964a069a3a GIT binary patch literal 9854 zcmcJ#_ghoX7cGn*K?4W`P^xr9dX-2=s)7_L0g)Pd2}GoYu8}Goq=uqY=^(vJ3q7D9 zgx&ncihTY58>yQhyHVAq6+lGO~MVagOauq5m9v<`6Yyea8LU7g^33d5#6JIpIaLG z-1|gCJhU3BN``QY-K@=)`@JW9M^t~b7uLWQYei|mDOJQ5UUg0~vIqbGt~C8wn@$P% z5pl~zRjG%ihlB(HjNnDEe-dDz2JixSk(`6?==a`q;3sz5kB=vYkB^Usv)VI9k21rD zJiTz9qguh+nKE8m#+pE4C16PIb7Iqf7uN3q_3Quydk+ycREf|Kaf=g!AT$7Pt5%T^ z8aVDmSdkMNlHc+OU`GfMo(G6M`@b>}|7lC2WNZAX;dG{O$?=D%iOq{^-7HrB zcK+ZB)!$jy15VseoVKDTyWDkjAxOOZ*QX8d#=@20sP;i@Xow95<_tP$?zwjiu96e z>w=n(W1oxV>vfZL+?;M$rHrbr-j~Q4Z0#f{{?B_kL)V~b;Ypj(r`bl+t51=jl6us% zisP0c%+gd*@NjHx-9P?#e$1%)t<{43ZZ7psNeO=)Y*C@keuU`+V-r`LF5ytZC}IBu zbG$h|;({qNsYw+4y*-`g9}w-)-1o;4J-XQ1@Hi(x-*u)|Ne#p@^B%N%Ah2Q;LCG(ZHBQFL?Li-e*gAW=zAB)SnqFmSqA*R7HO(vfNpkV`XWrI=Kemp{ z`XTgmXPPHd1fbknj1MsBI};8fOdfpI;lh4^A@)#qeVu zJb2)IeR*!gAy`Wti~X5b#3bXH#-tDsvNcs{`2~3O>45+@w=lsB@yx}N+7A*AT1iWo zCLk(xWbfP7ppKMjUV$FTMNv+WKG*ZuS~AGj-P2i^ah`gNkqv6jA-Y4>M+bYJ1ouAM zhio_#B1U3u6!)N$?mJ2ZbANVV>Xl|5+3DVVOU$d89?>$dF>ix#X2Zv>SuCqqWS!S> zomY&g)au`g*ER&}`dKnw-|KyL(Xv>>BAvCz#~c7<2z4i2S3Ea{s$tl4;Fmfrw5uQ6 zdZdFouB9Zn$Ox^;m&3&jBs=B z%AGu-+-3>0hu*7*Go%ig0F*a+}bQ z8Cc)R_4Xa}T)gvvu4H@GAS#AA)o|_JsNW8zdTY`Y2EMw$8M6iKf6zLo2}vX5#E`E8 zLfTXySbqo;)cm*vz`Y<&q8-QUpyBxlw(wEG~e8ar+}fF-S+sb& zDz})rHK~=qsT-W;2Plhi{U0Y!6S$rmj%Lf#_6Q{}o9ON=pa2rAPpn&9&e(qYe-t*V z@vGCn-F!I$@0)jPw(#1-v_r^JT~5YZW{`r1+085#GB%6IJC?RRv%5q~F>%c&OxynZ z%xYjt7MVY0BPNAf>4{^p{XbrcwEclTApV;6FC515Ns#UfLZ z2YrA=|5>ly8F0Bp+l*6!<>1heHsIR01D`C08YNKz!~*JpVLU<@0QpHnTUW{;>sYp= z#eU02VX*}f3vp`~7iJXDzx9yXd^Up*0-yHrbarsvW*UH%P49|4$4@)tJgR$?6o6f5 z(}}v|BxI|mgea@2>qg^b#ozLf17HU@5Fa-Fraz@QN%7lZ5lq)Vn7Q=hZ-RdZ+wFlD zJdw!7J1*GND#_)ys*=%^U*Zr0;^&s<^_q%ds6d3-IC~_amnNV&0xM^1;`=@1xFn zORQ(tnI35sf7lD%W&%N9E6Y~6qoNr#BAw2aiDiR!7CS8KoW|9)GoJAAS##ZIrQTUl zBW}q?kb$Tk4JP=7j@W0(Ea^R`$D>gU%nfa^3Dwub5~JS+2Q@c7Z9!yGJ7`P^5kIj$ zg3O{je@?Kt&v;;R&~$K48WR=L6GcxNIc4yw)BgJ8Bb7oLJ2X_3X0F)>>o%A^0m7n(dq+!+P%cBi)cl|I6CzcZF{kC1jgSv|j(+zAw~tn#IxO6lUd zj3V;e_C=~at8H=>ZGE#9<8QfP>BH9b3(Dp1zuXnN-uc&csJ#%8Q4@!2to4XneWRff zIaA{h=OO9fyAt`B20gSGZE0+5EGtCJ!AS6zFb)b4RvV0{cMY(`Y<6f+_ign{;I9w2 z?`CxPvQXRE34SVHT0>{c&%(Dx6)wu&)H)_KU+lGLizO9h`wd3$Z?JRA2VVyyEvYkH zj67X5JX#+y_;{BJ&ZZ+qCX?k*~w*V_4;9w2D`5E~7T0 zi3~~~jxu(te?CA<_w_{5#uzKO&O9+lj}F{x+F-4r%K9((FwF&kFVse6mP!vDt_>x9 zUx>VaMt_HzSdkN>rs`F|pR*{TM8rR(unZk0EXqrLLqyw#%|A#KhSQVT6e&4@$gbX?Lg3SMXEj8wDG)D;FDQ8q8%^qqz=<=X1rcVpN?A{w?-*WMkRlJWw z3+-+`iUdC0c&rucqhLSGU;|L-v$MoCUgN^3b8#YnlqTL^wOuSldYIkFOhc9*s!|7C zZCfJ4{p-Vci9_o*vV1IliLv_qg!ONlaCFU*O(&by>`lq|I z4hpOFuCt)IyVtJs&2^hgfhWI>P3B?isG8c+o4E?=CTZWp{P7tyGpzM%&=GR+HEzQq z;QD++XUMPpY$fV5j+c40`CQ?TIK@slTaYNrn;_-|+;Pj|71}f4JSG4)@AI{Y``0;x zLIAw$!n>UCW{q*q4}sT5IX7vGytw4;e6H4@D|~;@%gt~92mAUO5uvgxOB5{Ep(ELz z2y^2geQ@Ae;wJm&>(%ce!yCWuih%7rn$vb`hZwIICxbe)vwUsJ`2COHc=-h!)ol1J zae_fdeqQ#!4Z#;G@7#18om~t^Dkw@;k|8CYnl4`WYjT=O>;V$I*8CVeKahu}oThzI zby8mM|V!o$r$h~2x@YYgecvv)44 zVEmn@-71;kO#&Fe!dj}OTkMCigz^2|hDFfuhsUY!IpqW^Rup!q8D*X-)f`dZYB%V( z+J*fl9B5WrGvpO-E^E$NZT%lAFfaIUD6e?hS2V7Wc__#^DX?+UE*yzRce_CIV*Ig- z{@Au>EMGnM(+{WpNRX7+Z+dydzM}QZc1KP7jO|yav-WKD37#31CiIdmppsvasoZjJ zhjMmpSbHGVq~5PpJY6V>>3^|%q!?r@6j>j<0Q>N?Q0ng{%+Hv@V2buU<19&o4fYJ3 
zRGPrfikVAIeWWMdn<`28#Px;wwVB2fJsK(!YG{yS2zy&s*m4tR82lID$;!4LN{)!y zpw)6_si`@A8LBejn^g}GjhqlZDAg{@exDRYNd-JHnKP1SG{R>+AL4dGabfD5bgVHmK*ej73ye~XLd@pb%2zq%8KX$Hu7^Z0-YJ;>P` zMz#ko5|<$pp_sI$-oYj5Rh=9)S^tdB2NesZ#!D?}dqn52D&U`j+g9hxWVkkYBdn4% zc1N30W|e88l8+P(9tA)ET&$81!y6FlDYdS0di~KK=i%a0-3?AToyPe^X?C+|Opi&` zbYC5`m0#x7y;R^{@3?t`@KHC#yOZy27qg$X^7DX*k*Y3=r*l?48H^0m@Zwspa2w!= z8Rzo~D@*^~I(w;37S?CAu65^aOXttu1t0tLL~jVQR1W5BCjS95x7_>(Zn95tLXtC* zAm8pA%*Ozxz$w46`Mo9f*vB&x+b$=u+Rs;z6TfMxU!%gWE+ByCzxygTL4BDhyv1d} zD{w^)?0bgm#ph9M!q4%-kFW6k$paVLINgXgd}#yNe44b#J%%(m=NxxGP{<*vw)MiW z6(l~^rYnHK%O&5WXN!98Ny<2ab6T^H9CrHXik+$sMot2o2Lo_hnsKuJwz^8h{%eED zr2qYWAmxXK|7vS?)YGY#yL&+^&tb?CV%7@vu~e0v#UWvx>Telu)*eDs26wvKAAXce ztkMG*S4qdZc!ua^$*k4(E6U{?$oIskaZkqURK+y3u8q`gKrVl;*Kr~!{`;%OR=SeB ztg)-z=sVw9%gUM?9nbAWb9|%m;w8yNvf{LmJDY1OcB@=q+=4Avtsm1NlJkLj{9Zl{ zRC#RkkoY@`MSlwW&pUEARg2XK0BFF<0#Y;GEnlJE-CR#m7hqrV%3DU|i@%R^k^PBt zL9=sbeO)J@Xjbkq>qG>iL5LcW_dHMcNq-6n&mRKy6!O?ZPQK4yWR5ho z{xPlMGn^?DBvWZmb@A?AY_C}N(gWxoy$S=QS5eTZZNYtHt5=3oKWhj+ zaI1UQC{CL^LFo3hB0Yv-r9m2lrMlI5bVANzvQRAEKf+Mm4kO*IX;k1Odq94dXD2Rs zWX}=%8D2#S>f=WSng80ZNRQ|oV9VtCLzT;8yEMCSo9+)@Kf$MS9rA)R#TWwx9jD+W zi=Qw0Y3I9G%tn(aT>or~nGrp+uA%epd$M7pH7C*qpS6v-koOaxCl@1mMC(q!bC(s) zUgY#LBuJW)rT0r8sRU(ABiG>{p%6yPkp?S?iJpV=r}PnKq7z9&)n=Xca+&dPn@b(& ze@Ry7r&SX0G7$e$1tfQ_eZVwx$s~3PWZ`c=&{$USD7g>1-9MIsOBgfi&|QFmfGN66 z9#c7bCn;+>Nxde;IuKZxgr+*ZjS~=78Slptsx0B zC(yjsMZl}YK{pqR$m-cI5O-qwWr{63uC0&=YTvT9y zqo$r(5f9TobpUqSOOL1pntof&8#PdLxxmJ+JLjGv#)W(sdt|m&Pg~Ei>X{9WRM-FL z1ug=$A>CFfx;j-KXvS4L_(V+6QyE%^N?!-xBP2BBQ&_ebDIcw^RMMR1W|7&hYIg>2|Km|AZ``=`r~-Lr_^!%2k1B)uvrP6qZ4d`6mPBN>!xZ zJk**bYPy$w%2j60-W`={AU!!oHcBpiucFvTp~*34H)rMJxvaCFizQ$>@9ORE9?hrY z_T-JG%a{$#O{{Y(Q@I#^@Irxli=@R7a-z_}8Dgl&lu4==oQh=QPkJP(*KtS8U5075dF7 zk<6-UO^#Ci_8qvBD}L=^EXWWqC7Ki;}V;^B#BGlT;7cAwRaC& zbWyAQj{@gNy2Gix);R!v_O^)R%4Ip-S@)aIy3x`iPB(2_!mn0a%vs>k4@ENe8|dXK zHIjH9)!DsyQ_armEk(s_CL(LDUW*)7qrAO%P<3Cqijj$(X$*6}#_C8^v1VmCWJ3)T z|NZ4>M2bedT9pKY4Q7bvkLx|;@_%6ztsBvH1rl^a`}A}Jw($PS)0VjqRNGVbvSsj1 zeq5c?zFG;a$eXVSb{-Qhrt$Ln4+G5@1M_i1Fd>I!(Z%Ryk{|<_Z0^nWo_yDc#qZRN zW*Uzoe6ISr;?i&$?@WdN7*w?_e&Bt%7FO_$#Pp-o%+Bmb?YO52TFJ_X=Y` zB79{;AGE8w(H+`M-ReL&@+8qpOiubk(5AfNWE$Iqg?mQ0-sS zlV5}N6=QWM-DmG5T$?m-d0zL9tvkD61+==-ZvvE}&!_HEa);T%|5iUNQgr??Arj2v zYeVDHiT2)^j`CqWEdiHi8rN_or|xDgsM^V2=a8S%L1RY)zkF1Bo+rlV-5C~88P38p z>}G^G6k1xQ5BXO3n!~$(MW8-5Y&VdjIRXZ``{U*C+40wek?4&Psi$FSNhg8N zU>DZ?ITJ2d!p5txmKpAB-_hjqgY3%%Myhw3#rRoHotWKDxD&LqP=@Yh^miCTC#uU( z=E!Jmu<%zp1u}I+JV)@$hXbDq!nQdddw1iD+p{!H2R#miIqW_TAeZvSG>?CwQDl<= zq&r!EMjYv>v=zr(%WfQ4B|0sk7UEOk!}O@D@vX9{>t{X+!7w~tYw!I{;N8C%TN>!M z>7xX?(GH%vZg|!Xp4X8E(FQ-T=0g3Wlt`Tf|0;>yF&gVyM`s}om7?7plxL!q;@a!V z{l4{qolEEroMw2oJNmp@)G2ljpK|T#-8cW*{bS2K$Q!%h%5Qx>Yp}*|3=`1IrGYA_ z#3pFJc%b*?vw(G9JA{OJs6Iu?03Z#!9|dMV4WKd?L9VWXkJ|UY=bg3AH;sIrwQD;I zc&6=zrtXy=AOQHjS}Aa=9}Kkvkysnn7oOmLzpHuu&^6N3?IQXpetcqqXIvVbh9q;e zZ*y5xQ0j@_UR5}&q#}Q}_z?g~1NZ0~DoWtg{a2c3{5#i|(VB{zs7lh{ns-0}39+eZ zmuodiGS}9p!Cll;j;+GMld@E%{}vT(vQ^7~sZyUydd{}xs*Gvp`d6JIiVu(*_EN9q z$Qn5T>zL^jWs2J*dS)WbaTymtJNWt7SCtZNBxs!#HlJZ0Xj4IzF#0FmKaa9WFj*t^ z4$IrEQwQer_l3e3LFZ0uR?w~Qj8tBUW5aL8_8yvFiQH_QgmCjr9apD6&DIQX(SNWv zb|F@%eNq*cn1*MdhziznN^XqbD54Rd`f42e z%dkRxE0pw|k;g*Kxz=~~Q^d%iQS|mdZfV%PFuTgKOoQQ2B*Db7CQ{U+%(vqjr2@+)eLf{cZscW-ddJS@qnyU6i*!6DF%fms`AZv@>Z`K>M^jl7)Jy^-; z(0WW!4OGD=qRpx%OsLesm*9)M|LH&G?IjJgE9N?vI~25Tc)^B;`$p7s5P+z)rqsu8 z#LN*-*k4E7rlzJtJp0n5I7ds<0+d9DE{E_46|Ejr244-$k>h0krj6YhP4oX0jS7JghDO3@cFKC#AZ`8L30t0U8)M^2*m&M6}$Qp~Q_wmx(G zn(!n3Y)O0=4MK=5P0>QQfvJ@cAL_pnWzfF4g)K|wu=I~FYX(3W-2 
zf|@^YU!Ut&*_K+D;p+qF5BVv9?k(CLxvwrM?*$1Y8Q1rO%po-&x{`*9F<>gKc8C(qtBR&?*4F>(;9=xmQap z#=6YaIOw962LhIK=$)s(7ibVg-6j|qt}Gf?spXuC?|60jPfUv_x01+;B7RU=)jPnT zh|?|SxQAbf65$EmPTvdZzt8N+PABxnwrkm)Y?Tga)nYIMgoc7w4XzRV2$`%ex6z9bNU zTv2t^8?QF3hskl_jm7(RNCWicsx?yw57HN%DNW%~m{)QN=KZ8m6{!i#T2h!Xg3@O2 zaAK4htobm}2YP9pB2afR<%OFoY%oDA4Bs?^mtDV=I(pY}zRp|(4qBFfrFG}vZP7x$ zMB$pC$#-tpQDWYIddI0^+71N$Q1U1_4^iys=?2}s!KPO2!JcD*HVg#F{oEWIe*nU-%yV<;YJz}09_`Dz?AWLlKA$!=5B7tkp z9_Ihe&3$NL+wtF@TpDvLR)ihayRpLvH0}o-Zvte|uU@(+0lWT*aU3ZK?GKTLYcJCO zQ~U7QT6{r3c?3TzU|gX^V}%M%XI;xd_qK-&M9KdV1Sos|8}%17JACDbM!iDforMt* z7Auxhgv1PQq~6FDWuVifhd=4`Vl2Xr#vI%}S{cQRAwGNOF9zF0RTH&w_q#Z%t4 zGx*92|7e3Cd$W~Y2_l4SU!I)$Ol%$mL(dkfgnhfk3#n<-t!kX(JFLhS_|%>=Fst7u zheXTT)Vo^bsf*`klEo@*>S0f;%3gz^+m_^rcv(QLan(?cKx9RG{g^Ez;QtBl6Ph+saou0`c!| zI8XcO^OnR(X{|3YvOxJr%@#4@tTR!0O7_qzZS`-=iZ3mwL3@LwASi z(QvV}`KN5>8$=N+@p9IR8VfQdvSZ+Lf{Fv;$%v%_0s%+RBoafg; zB^upVY;ewq1QLHeD4y<6Oa4d6hgbOmM;(hw8Y(3@UM)XV_nC91+I{svghGcwL9DQC zyM&5PhT=%Y7C{j)JyC3slsF1h)J!2ju6cNc?HhXJLHl25entk$v!XYOzK=(rP~Rh! z*p(T?yl4h)l~Mkkoa1>4%y{DERdSd$UE=vGXLs>#A3q)Cuz$F)ey2S&b!HYf=Mm@i z$vBfjX|diF=}~}Se`0d%u`^s!Jiz*S>KK$b5mKnTyL{tReI0e>|9%tuAFB^Xu1EqI z2*~6xoM8t-esZMcNE3x1OqyN-Lp+FtX23bZdIhv1I_L3poeEE12w+x`r3BtK?UgS_ zgjv%6!;CN9dDzM}v3-DxU~SIgW9;-IvqR%OnBm4Ejqg0G}YnQC~*J|RZ=pB5-~vu$W~ z)Un>6^zlydKLzhIZ?b;=zuG68MC1PzKM`{<{lBe>Vh5EBTCLDuB`X-584ml0{G L>8MsHTOs~GC;Fmw literal 0 HcmV?d00001 diff --git a/docs/markdown.md b/docs/markdown.md new file mode 100644 index 0000000..faeea60 --- /dev/null +++ b/docs/markdown.md @@ -0,0 +1,55 @@ +# Markdown Files + +Whether you write your book's content in Jupyter Notebooks (`.ipynb`) or +in regular markdown files (`.md`), you'll write in the same flavor of markdown +called **MyST Markdown**. +This is a simple file to help you get started and show off some syntax. + +## What is MyST? + +MyST stands for "Markedly Structured Text". It +is a slight variation on a flavor of markdown called "CommonMark" markdown, +with small syntax extensions to allow you to write **roles** and **directives** +in the Sphinx ecosystem. + +For more about MyST, see [the MyST Markdown Overview](https://jupyterbook.org/content/myst.html). + +## Sample Roles and Directives + +Roles and directives are two of the most powerful tools in Jupyter Book. They +are like functions, but written in a markup language. They both +serve a similar purpose, but **roles are written in one line**, whereas +**directives span many lines**. They both accept different kinds of inputs, +and what they do with those inputs depends on the specific role or directive +that is being called. + +Here is a "note" directive: + +```{note} +Here is a note +``` + +It will be rendered in a special box when you build your book. + +Here is an inline directive to refer to a document: {doc}`markdown-notebooks`. + + +## Citations + +You can also cite references that are stored in a `bibtex` file. For example, +the following syntax: `` {cite}`holdgraf_evidence_2014` `` will render like +this: {cite}`holdgraf_evidence_2014`. + +Moreover, you can insert a bibliography into your page with this syntax: +The `{bibliography}` directive must be used for all the `{cite}` roles to +render properly. +For example, if the references for your book are stored in `references.bib`, +then the bibliography is inserted with: + +```{bibliography} +``` + +## Learn more + +This is just a simple starter to get you started. +You can learn a lot more at [jupyterbook.org](https://jupyterbook.org). 
diff --git a/docs/references.bib b/docs/references.bib new file mode 100644 index 0000000..783ec6a --- /dev/null +++ b/docs/references.bib @@ -0,0 +1,56 @@ +--- +--- + +@inproceedings{holdgraf_evidence_2014, + address = {Brisbane, Australia, Australia}, + title = {Evidence for {Predictive} {Coding} in {Human} {Auditory} {Cortex}}, + booktitle = {International {Conference} on {Cognitive} {Neuroscience}}, + publisher = {Frontiers in Neuroscience}, + author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Knight, Robert T.}, + year = {2014} +} + +@article{holdgraf_rapid_2016, + title = {Rapid tuning shifts in human auditory cortex enhance speech intelligibility}, + volume = {7}, + issn = {2041-1723}, + url = {http://www.nature.com/doifinder/10.1038/ncomms13654}, + doi = {10.1038/ncomms13654}, + number = {May}, + journal = {Nature Communications}, + author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Rieger, Jochem W. and Crone, Nathan and Lin, Jack J. and Knight, Robert T. and Theunissen, Frédéric E.}, + year = {2016}, + pages = {13654}, + file = {Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:C\:\\Users\\chold\\Zotero\\storage\\MDQP3JWE\\Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:application/pdf} +} + +@inproceedings{holdgraf_portable_2017, + title = {Portable learning environments for hands-on computational instruction using container-and cloud-based technology to teach data science}, + volume = {Part F1287}, + isbn = {978-1-4503-5272-7}, + doi = {10.1145/3093338.3093370}, + abstract = {© 2017 ACM. There is an increasing interest in learning outside of the traditional classroom setting. This is especially true for topics covering computational tools and data science, as both are challenging to incorporate in the standard curriculum. These atypical learning environments offer new opportunities for teaching, particularly when it comes to combining conceptual knowledge with hands-on experience/expertise with methods and skills. Advances in cloud computing and containerized environments provide an attractive opportunity to improve the effciency and ease with which students can learn. This manuscript details recent advances towards using commonly-Available cloud computing services and advanced cyberinfrastructure support for improving the learning experience in bootcamp-style events. We cover the benets (and challenges) of using a server hosted remotely instead of relying on student laptops, discuss the technology that was used in order to make this possible, and give suggestions for how others could implement and improve upon this model for pedagogy and reproducibility.}, + booktitle = {{ACM} {International} {Conference} {Proceeding} {Series}}, + author = {Holdgraf, Christopher Ramsay and Culich, A. and Rokem, A. and Deniz, F. and Alegro, M. and Ushizima, D.}, + year = {2017}, + keywords = {Teaching, Bootcamps, Cloud computing, Data science, Docker, Pedagogy} +} + +@article{holdgraf_encoding_2017, + title = {Encoding and decoding models in cognitive electrophysiology}, + volume = {11}, + issn = {16625137}, + doi = {10.3389/fnsys.2017.00061}, + abstract = {© 2017 Holdgraf, Rieger, Micheli, Martin, Knight and Theunissen. Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. 
This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generated a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aimis to provide a practical understanding of predictivemodeling of human brain data and to propose best-practices in conducting these analyses.}, + journal = {Frontiers in Systems Neuroscience}, + author = {Holdgraf, Christopher Ramsay and Rieger, J.W. and Micheli, C. and Martin, S. and Knight, R.T. and Theunissen, F.E.}, + year = {2017}, + keywords = {Decoding models, Encoding models, Electrocorticography (ECoG), Electrophysiology/evoked potentials, Machine learning applied to neuroscience, Natural stimuli, Predictive modeling, Tutorials} +} + +@book{ruby, + title = {The Ruby Programming Language}, + author = {Flanagan, David and Matsumoto, Yukihiro}, + year = {2008}, + publisher = {O'Reilly Media} +} diff --git a/docs/requirements.txt b/docs/requirements.txt new file mode 100644 index 0000000..7e821e4 --- /dev/null +++ b/docs/requirements.txt @@ -0,0 +1,3 @@ +jupyter-book +matplotlib +numpy From 433f80fcc347e6846cb04c332d204566abf5bba7 Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 18:19:10 -0700 Subject: [PATCH 03/12] remove references --- docs/_config.yml | 9 --------- 1 file changed, 9 deletions(-) diff --git a/docs/_config.yml b/docs/_config.yml index 5f534f8..7284d30 100644 --- a/docs/_config.yml +++ b/docs/_config.yml @@ -10,15 +10,6 @@ logo: logo.png execute: execute_notebooks: force -# Define the name of the latex output file for PDF builds -latex: - latex_documents: - targetname: book.tex - -# Add a bibtex file so that we can create citations -bibtex_bibfiles: - - references.bib - # Information about where the book exists on the web repository: url: https://github.com/executablebooks/jupyter-book # Online location of your book From b7c7ff9786cf5b4077ec5bdb5b39576770f9e20c Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 18:31:02 -0700 Subject: [PATCH 04/12] changing to branch and removing references --- docs/_config.yml | 2 +- docs/references.bib | 56 --------------------------------------------- 2 files changed, 1 insertion(+), 57 deletions(-) delete mode 100644 docs/references.bib diff --git 
a/docs/_config.yml b/docs/_config.yml index 7284d30..63b8392 100644 --- a/docs/_config.yml +++ b/docs/_config.yml @@ -14,7 +14,7 @@ execute: repository: url: https://github.com/executablebooks/jupyter-book # Online location of your book path_to_book: docs # Optional path to your book, relative to the repository root - branch: master # Which branch of the repository should be used when creating links (optional) + branch: lesson_content # Which branch of the repository should be used when creating links (optional) # Add GitHub buttons to your book # See https://jupyterbook.org/customize/config.html#add-a-link-to-your-repository diff --git a/docs/references.bib b/docs/references.bib deleted file mode 100644 index 783ec6a..0000000 --- a/docs/references.bib +++ /dev/null @@ -1,56 +0,0 @@ ---- ---- - -@inproceedings{holdgraf_evidence_2014, - address = {Brisbane, Australia, Australia}, - title = {Evidence for {Predictive} {Coding} in {Human} {Auditory} {Cortex}}, - booktitle = {International {Conference} on {Cognitive} {Neuroscience}}, - publisher = {Frontiers in Neuroscience}, - author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Knight, Robert T.}, - year = {2014} -} - -@article{holdgraf_rapid_2016, - title = {Rapid tuning shifts in human auditory cortex enhance speech intelligibility}, - volume = {7}, - issn = {2041-1723}, - url = {http://www.nature.com/doifinder/10.1038/ncomms13654}, - doi = {10.1038/ncomms13654}, - number = {May}, - journal = {Nature Communications}, - author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Rieger, Jochem W. and Crone, Nathan and Lin, Jack J. and Knight, Robert T. and Theunissen, Frédéric E.}, - year = {2016}, - pages = {13654}, - file = {Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:C\:\\Users\\chold\\Zotero\\storage\\MDQP3JWE\\Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:application/pdf} -} - -@inproceedings{holdgraf_portable_2017, - title = {Portable learning environments for hands-on computational instruction using container-and cloud-based technology to teach data science}, - volume = {Part F1287}, - isbn = {978-1-4503-5272-7}, - doi = {10.1145/3093338.3093370}, - abstract = {© 2017 ACM. There is an increasing interest in learning outside of the traditional classroom setting. This is especially true for topics covering computational tools and data science, as both are challenging to incorporate in the standard curriculum. These atypical learning environments offer new opportunities for teaching, particularly when it comes to combining conceptual knowledge with hands-on experience/expertise with methods and skills. Advances in cloud computing and containerized environments provide an attractive opportunity to improve the effciency and ease with which students can learn. This manuscript details recent advances towards using commonly-Available cloud computing services and advanced cyberinfrastructure support for improving the learning experience in bootcamp-style events. 
We cover the benets (and challenges) of using a server hosted remotely instead of relying on student laptops, discuss the technology that was used in order to make this possible, and give suggestions for how others could implement and improve upon this model for pedagogy and reproducibility.}, - booktitle = {{ACM} {International} {Conference} {Proceeding} {Series}}, - author = {Holdgraf, Christopher Ramsay and Culich, A. and Rokem, A. and Deniz, F. and Alegro, M. and Ushizima, D.}, - year = {2017}, - keywords = {Teaching, Bootcamps, Cloud computing, Data science, Docker, Pedagogy} -} - -@article{holdgraf_encoding_2017, - title = {Encoding and decoding models in cognitive electrophysiology}, - volume = {11}, - issn = {16625137}, - doi = {10.3389/fnsys.2017.00061}, - abstract = {© 2017 Holdgraf, Rieger, Micheli, Martin, Knight and Theunissen. Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generated a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aimis to provide a practical understanding of predictivemodeling of human brain data and to propose best-practices in conducting these analyses.}, - journal = {Frontiers in Systems Neuroscience}, - author = {Holdgraf, Christopher Ramsay and Rieger, J.W. and Micheli, C. and Martin, S. and Knight, R.T. 
and Theunissen, F.E.}, - year = {2017}, - keywords = {Decoding models, Encoding models, Electrocorticography (ECoG), Electrophysiology/evoked potentials, Machine learning applied to neuroscience, Natural stimuli, Predictive modeling, Tutorials} -} - -@book{ruby, - title = {The Ruby Programming Language}, - author = {Flanagan, David and Matsumoto, Yukihiro}, - year = {2008}, - publisher = {O'Reilly Media} -} From cdc956ff5b81c9d61d3131f20cc330440743709c Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 18:47:26 -0700 Subject: [PATCH 05/12] adding deploy book action --- .github/workflows/deploy-book.yml | 60 +++++++++++++++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 .github/workflows/deploy-book.yml diff --git a/.github/workflows/deploy-book.yml b/.github/workflows/deploy-book.yml new file mode 100644 index 0000000..cc2b1dc --- /dev/null +++ b/.github/workflows/deploy-book.yml @@ -0,0 +1,60 @@ +name: deploy-book + +# Run this when the master or main branch changes +on: + push: + branches: + - master + - main + # If your git repository has the Jupyter Book within some-subfolder next to + # unrelated files, you can make this run only if a file within that specific + # folder has been modified. + # + # paths: + # - some-subfolder/** + +# This job installs dependencies, builds the book, and pushes it to `gh-pages` +jobs: + deploy-book: + runs-on: ubuntu-latest + permissions: + pages: write + id-token: write + steps: + - uses: actions/checkout@v3 + + # Install dependencies + - name: Set up Python 3.11 + uses: actions/setup-python@v4 + with: + python-version: 3.11 + + - name: Install dependencies + run: | + pip install -r jupyter-book + + # (optional) Cache your executed notebooks between runs + # if you have config: + # execute: + # execute_notebooks: cache + # - name: cache executed notebooks + # uses: actions/cache@v3 + # with: + # path: _build/.jupyter_cache + # key: jupyter-book-cache-${{ hashFiles('requirements.txt') }} + + # Build the book + - name: Build the book + run: | + jupyter-book build . + + # Upload the book's HTML as an artifact + - name: Upload artifact + uses: actions/upload-pages-artifact@v2 + with: + path: "_build/html" + + # Deploy the book's HTML to GitHub Pages + - name: Deploy to GitHub Pages + id: deployment + uses: actions/deploy-pages@v2 From b22300a280f5e6a860b56e7de4b02afaced8ea85 Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 18:48:57 -0700 Subject: [PATCH 06/12] deploy from branch --- .github/workflows/deploy-book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/deploy-book.yml b/.github/workflows/deploy-book.yml index cc2b1dc..f93fdeb 100644 --- a/.github/workflows/deploy-book.yml +++ b/.github/workflows/deploy-book.yml @@ -4,8 +4,8 @@ name: deploy-book on: push: branches: - - master - main + - lesson_content # If your git repository has the Jupyter Book within some-subfolder next to # unrelated files, you can make this run only if a file within that specific # folder has been modified. 
From 077dcb3c0777a622dd8c5d1d10bcd3085b9da6bd Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 18:50:41 -0700 Subject: [PATCH 07/12] deploy from branch --- .github/workflows/deploy-book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/deploy-book.yml b/.github/workflows/deploy-book.yml index f93fdeb..5df1d08 100644 --- a/.github/workflows/deploy-book.yml +++ b/.github/workflows/deploy-book.yml @@ -31,7 +31,7 @@ jobs: - name: Install dependencies run: | - pip install -r jupyter-book + pip install jupyter-book # (optional) Cache your executed notebooks between runs # if you have config: From d525fc3d6f8e3b2432f39e4abd61b320f726f878 Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 18:53:52 -0700 Subject: [PATCH 08/12] setting path --- .github/workflows/deploy-book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/deploy-book.yml b/.github/workflows/deploy-book.yml index 5df1d08..6bb436d 100644 --- a/.github/workflows/deploy-book.yml +++ b/.github/workflows/deploy-book.yml @@ -46,7 +46,7 @@ jobs: # Build the book - name: Build the book run: | - jupyter-book build . + jupyter-book build docs/. # Upload the book's HTML as an artifact - name: Upload artifact From 5fec052b7b9b7687ad5498a53af739a8e8e455a8 Mon Sep 17 00:00:00 2001 From: valentina Date: Sun, 23 Jun 2024 19:00:13 -0700 Subject: [PATCH 09/12] setting path --- .github/workflows/deploy-book.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/deploy-book.yml b/.github/workflows/deploy-book.yml index 6bb436d..8034b15 100644 --- a/.github/workflows/deploy-book.yml +++ b/.github/workflows/deploy-book.yml @@ -52,7 +52,7 @@ jobs: - name: Upload artifact uses: actions/upload-pages-artifact@v2 with: - path: "_build/html" + path: "docs/_build/html" # Deploy the book's HTML to GitHub Pages - name: Deploy to GitHub Pages From 807dcc19d1b8a584f975943e87cc8684029206f7 Mon Sep 17 00:00:00 2001 From: valentina Date: Mon, 24 Jun 2024 04:07:39 -0700 Subject: [PATCH 10/12] adding one line intro --- docs/intro.md | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/docs/intro.md b/docs/intro.md index f8cdc73..d755bc7 100644 --- a/docs/intro.md +++ b/docs/intro.md @@ -1,11 +1,4 @@ -# Welcome to your Jupyter Book - -This is a small sample book to give you a feel for how book content is -structured. -It shows off a few of the major file types, as well as some sample content. -It does not go in-depth into any particular topic - check out [the Jupyter Book documentation](https://jupyterbook.org) for more information. - -Check out the content pages bundled with this sample book to see more. 
+# Welcome to the SciPy 2024 GitHub Actions for Scientific Workflows Tutorial

 ```{tableofcontents}
 ```

From 6390d38dbec9c2960f55f9f7801ac33e6f360c57 Mon Sep 17 00:00:00 2001
From: valentina
Date: Mon, 24 Jun 2024 04:57:12 -0700
Subject: [PATCH 11/12] adding a bit on caching

---
 docs/lesson.md | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/docs/lesson.md b/docs/lesson.md
index 8286c4d..990101a 100644
--- a/docs/lesson.md
+++ b/docs/lesson.md
@@ -71,6 +71,51 @@ After the workflow is executed, the `psd.png` and `broadband.png` files are updated in
 ![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/ambient_sound_analysis/img/broadband.png)

+# Caching
+
+Dependency reinstalls between consecutive workflow runs are time-consuming and usually unnecessary. The process can be sped up by caching the package builds. Caches are removed automatically if not accessed for 7 days, and a repository can store up to 10GB of caches. A cache can also be removed manually to reset the installation.
+
+## Caching `pip` installs
+
+`pip` packages can be cached by adding the `cache: 'pip'` setting to the Python setup action. If you are not using the default `requirements.txt` file for installation, you should also provide a `cache-dependency-path`.
+
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/pip-caching.png)
+
+## Caching `conda` installs
+
+Conda packages can be cached similarly within the conda setup action.
+
+![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/conda-caching.png)
+
+## Caching `apt-get` installs
+
+Packages such as `ffmpeg` can take a long time to install. There is no official action to cache `apt-get` packages, but they can be cached with the [walsh128/cache-apt-pkgs-action](https://github.com/marketplace/actions/cache-apt-packages).
+
+```yaml
+- uses: walsh128/cache-apt-pkgs-action@latest
+  with:
+    packages: ffmpeg
+```
+
+## Caching any data
+
+The general [`cache`](https://github.com/marketplace/actions/cache) action allows caching data at any path. Apart from package builds, one can use this option to avoid regenerating results while testing.
+
+```yaml
+- uses: actions/cache@v4
+  id: cache
+  with:
+    path: img/
+    key: img
+
+- name: Get all files
+  if: steps.cache.outputs.cache-hit != 'true'
+  run: …
+```
+
+[Caching Documentation](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows)
+
+
 # Exporting Results

From 225dcd704634d418c5f43c99b9a1f193c8679ffb Mon Sep 17 00:00:00 2001
From: valentina
Date: Mon, 24 Jun 2024 04:58:14 -0700
Subject: [PATCH 12/12] removing lesson at upper level

---
 lesson.md | 217 ------------------------------------------------------
 1 file changed, 217 deletions(-)
 delete mode 100644 lesson.md

diff --git a/lesson.md b/lesson.md
deleted file mode 100644
index 8286c4d..0000000
--- a/lesson.md
+++ /dev/null
@@ -1,217 +0,0 @@
-# Overview
-
-1. [Setup](#setup)
-2. [GitHub Actions Python Environment](#github-actions-python-environment-workflow)
-3. [Orcasound Spectrogram Visualization Workflow](#orcasound-spectrogram-visualization-workflow)
-4. [Exporting Results](#exporting-results)
-5. [Visualizing Results on a Webpage](#visualizing-results-on-a-webpage)
-6. 
[Scaling Workflows](#scaling-workflows) - -All workflow configurations are stored in the [`.github/workflows`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/tree/main/.github/workflows) and will go through them in the following order: - -1. [`python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml) -2. [`conda_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/conda_env.yml) -3. [`noise_processing.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/noise_processing.yml) -4. [`create_website_spectrogram.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website_spectrogram.yml) -5. [`create_website.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website.yml) -6. ... - - - - -# Setup -* Fork this repo -* Enable Github Actions: - * Settings -> Actions -> Allow actions and reusable workflows - * [Managing Permissions Documentation](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#managing-github-actions-permissions-for-your-repository) - -# GitHub Actions Python Environment Workflow - -## Installing Packages with `pip` -First, we will run a basic workflow which creates a python environment with a few scientific packages and prints out their version. - -Python Environment Workflow Configuration: -[`.github/workflows/python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml) - - -* Go to **Actions** tab -* Click on **Python Environment** -* Click **Run workflow**: this will manually trigger the workflow ([`dispatch_workflow`](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow)) -* Click on the newly created run to see the execution progress - - -### Exercise: -Edit [`.github/workflows/python_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/python_env.yml) to install packages popular in your research. Trigger the workflow to monitor their installation. - - -## Installing Packages with Conda -We can also install packages through conda (instead of `pip`). We will use a `miniconda-setup` action to achieve that easily. - - -Conda Environment Workflow Configuration [`.github/workflows/conda_env.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/conda_env.yml) - -# Orcasound Spectrogram Visualization Workflow - -Next, we will demonstrate how GitHub Actions can be used to display a spectrogram for a segment from an underwater audio stream. - -Spectrogram Visualization Workflow: [`.github/workflows/noise_processing.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/noise_processing.yml) - -Workflow Steps: - -* Generate spectrogram for a period of time (with `ambient_sound_analysis` package) - * Download data from AWS S3 bucket (in `.ts` format) for a given time period - * Convert many small `.ts` files to one file in `.wav` format - * Generate power spectrogram and store it in `.parquet` format -* Read the power spectrogram in `pandas` dataframe format -* Create plots and save them: `psd.png` and `broadband.png`. 
-* Upload the `.png` files to GitHub - -After the workflow is executed `psd.png` and `broadband.png`files are updated in the repo and are visualized below. -![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/ambient_sound_analysis/img/psd.png) - -![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/ambient_sound_analysis/img/broadband.png) - - -# Exporting Results - -We will discuss several different ways to export results. - -## Uploading to the GitHub Repository - -One of the easiest ways to display results is to store them in the GitHub repository. This can be a quick solution, for example, to display a small plot or a table within the `Readme.md` of the repository and update it as the workflow is rerun. This is not a practical solution for big outputs as the GitHub repositories are recommended to not exceed more than 1GB, and all versions of the files will be preserved in the repository's history (thus slowing down cloning). - -It is possible to execute all steps to add, commit, and push a file to GitHub, but there is already an [GitHub Auto Commit Action](https://github.com/marketplace/actions/git-auto-commit) to achieve that. - -![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/auto-commit-action.png) - - -## Uploading as a GitHub Workflow Artifact - -GitHub provides an option for temporary storage of GitHub Action data as Workflow Artifacts. These are kept on the GitHub website as zipped files and can downloaded within 90 days for public repositories, or 400 days for private repositories. - -There is a GitHub Action which can upload file/s as GitHub Artifacts. - -![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/artifact-upload-action.png) - -The artifact can be found by clicking on the workflow run and scrolling down to a section Artifacts. - -![alt text](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/artifact_github_interface.png) - - -The artifact can be downloaded directly from the interface but also can be downloaded through the GitHub client. - -``` -gh run download -``` - -The workflow run also provides a publicly available link to the download artifact: - -Artifact download URL: [https://github.com/uwescience/SciPy2024- -GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017](https://github.com/uwescience/SciPy2024- -GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017) - -There is a `download-artifact` action to download the artifacts and share between jobs within a workflow run (note this is limited to the inidividual workflow run, for downloading across runs use the other options). - -[Here](Artifact download URL: https://github.com/uwescience/SciPy2024- -GitHubActionsTutorial/actions/runs/9591972369/artifacts/1619380017) is more detailed documentation on GitHub Artifacts. - - - - -## Uploading to Personal Storage - -A more long-term solution is to store outputs to personal storage. This could be for example Google Drive or a Cloud Provider Object Storage such as an AWS S3 bucket. To have a write access to these storage systems one will need to provide the credential information securely to GitHub Actions. This can be achieved through storing the credential information as Action Secrets. - -The write operation can be performed directly from the Python code or from the GitHub Action configuration. 
Here will demonstrate how to upload data to Google Drive with `rclone`, a tool for transferring data between storage system which is quite provide agnostic. - -The approach consists of a few steps: - -1. use an `rclone` GitHub Action to avoid installing `rclone` manually - * we will use [AnimMouse/setup-rclone](https://github.com/marketplace/actions/setup-rclone-action) -* configure a Google Drive remote locally -* encode the text in the config file and save it as a secret `RCLONE_CONFIG` - * MacOX: `openssl base64 -in ~/.config/rclone/rclone_drive.conf` -* run the `rclone` command to upload the plots to Google Drive - * `rclone copy ambient_sound_analysis/img/broadband.png mydrive:rclone_uploads/` - - - ![alt txt](https://raw.githubusercontent.com/uwescience/SciPy2024-GitHubActionsTutorial/main/img/rclone_upload.png) - -[Secrets Documentation](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions) - - - - -# Visualizing Results on a Webpage - -We saw it is pretty easy to continuously update results on the Readme of the repository. However, sometimes we would like to display them on a website. - - - -We will demonstrate the scenario of converting a Jupyter Notebook to a webpage. - -Notebook: [`plot_noise_levels.ipynb`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/ambient_sound_analysis/plot_noise_levels.ipynb) - -Create Website with Spectrogram Workflow: [`.github/workflows/create_website_spectrogram.yml`](https://github.com/uwescience/SciPy2024-GitHubActionsTutorial/blob/main/.github/workflows/create_website_spectrogram.yml) - - -The process consists of the following stages: - -* build the website: - * use `nbconvert` to convert the notebook to an html webpage - * `jupyter nbconvert plot_noise_levels.ipynb --execute --to html --output-dir=_build/html --no-input` - * upload the built content as an artifact using the `upload-pages-artifact` action - -* deploy the website (if built successfully): - * configure website with `actions/configure-pages` - * deploy website with `actions/deploy-pages` - -The website can be found here: - -[https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/plot\_noise\_levels.html](https://uwescience.github.io/SciPy2024-GitHubActionsTutorial/plot_noise_levels.html) - - -The procedure is set up to run on `push` thus every time the notebook is updated the website is updated. - -The plots in the notebook use `plotly` and they have interactive features. Those are preserved in the website providing the ability to engage with the data without having to run a notebook. - -The notebook does not display any code which is conventient for showing results to the public. This was achieved by providing the `--no-input` argument to `nbconvert`. We also set the `%capture` magic in the notebook to capture some subprocess output. One can configure this further using cell tags to display content selectively. - - - - - -Other ways: - -* [Jupyterbook](https://jupyterbook.org/en/stable/publish/gh-pages.html) -* [Readthedocs](https://about.readthedocs.com/?ref=readthedocs.com) -* Jekyll template -* Dashboard - - - -# Scaling Workflows - - - - - - - - - - - - - - - - - - - - - - - -