From 7e311aa18fd800b9cd1797180b7d54c6b5b9e19c Mon Sep 17 00:00:00 2001
From: Cedric Vidal
Date: Tue, 3 Dec 2024 09:46:53 -0500
Subject: [PATCH] Fix notebook 3 static markdown + AI Foundry wording (#282)

* Replaced Python cells generating static Markdown with Markdown cells
* Replaced AI Studio with AI Foundry
---
 docs/README.md                       |  2 +-
 docs/workshop/workshop-3-build.ipynb | 45 +++++++++++++---------------
 2 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/docs/README.md b/docs/README.md
index c2494f36..8c05dc3a 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -278,7 +278,7 @@ This shows you all the Python functions that were called in order to generate th
 
 To understand how well our prompt flow performs using defined metrics like **groundedness**, **coherence** etc we can evaluate the results. To evaluate the prompt flow, we need to be able to compare it to what we see as "good results" in order to understand how well it aligns with our expectations.
 
-We may be able to evaluate the flow manually (e.g., using Azure AI Studio) but for now, we'll evaluate this by running the prompt flow using **gpt-4** and comparing our performance to the results obtained there. To do this, follow the instructions and steps in the notebook `evaluate-chat-prompt-flow.ipynb` under the `eval` folder.
+We may be able to evaluate the flow manually (e.g., using Azure AI Foundry) but for now, we'll evaluate this by running the prompt flow using **gpt-4** and comparing our performance to the results obtained there. To do this, follow the instructions and steps in the notebook `evaluate-chat-prompt-flow.ipynb` under the `eval` folder.
 
 You can also view the evaluation metrics by running the following command from the src/api folder.
 
diff --git a/docs/workshop/workshop-3-build.ipynb b/docs/workshop/workshop-3-build.ipynb
index 4b818a00..0c05773e 100644
--- a/docs/workshop/workshop-3-build.ipynb
+++ b/docs/workshop/workshop-3-build.ipynb
@@ -37,13 +37,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
    "source": [
-    "from IPython.display import Markdown\n",
-    "Markdown(f\"cd ./src/api\")"
+    "```console\n",
+    "cd ./src/api\n",
+    "```"
    ]
   },
   {
@@ -54,13 +53,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
    "source": [
-    "from IPython.display import Markdown\n",
-    "Markdown(f\"fastapi dev main.py\")"
+    "```console\n",
+    "fastapi dev main.py\n",
+    "```"
    ]
   },
   {
@@ -93,13 +91,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
    "source": [
-    "from IPython.display import Markdown\n",
-    "Markdown(f\"cd ./src/web\")"
+    "```console\n",
+    "cd ./src/web\n",
+    "```"
    ]
   },
   {
@@ -110,13 +107,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
    "source": [
-    "from IPython.display import Markdown\n",
-    "Markdown(f\"npm install\")"
+    "```console\n",
+    "npm install\n",
+    "```"
    ]
   },
   {
@@ -127,13 +123,12 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {},
-   "outputs": [],
    "source": [
-    "from IPython.display import Markdown\n",
-    "Markdown(f\"npm run dev\")"
+    "```console\n",
+    "npm run dev\n",
+    "```"
    ]
   },
   {