From 220b238e63f1403e81a318cda8e07a1756c3bffc Mon Sep 17 00:00:00 2001
From: erzhen
Date: Tue, 19 Sep 2023 16:44:03 -0400
Subject: [PATCH] update todo and author names

---
 src/content/post/week3.md | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/src/content/post/week3.md b/src/content/post/week3.md
index cf107c8..c4d1768 100644
--- a/src/content/post/week3.md
+++ b/src/content/post/week3.md
@@ -9,6 +9,11 @@ slug = "week3"
 
 # Prompt Engineering (Week 3)
 
+Presenting Team: Group 3
+
+Blogging Team: Aparna Kishore, Erzhen Hu, Elena Long, Jingping Wan
+
+
 - [(Monday, 09/11/2023) Prompt Engineering](#monday-09112023-prompt-engineering)
   - [Warm-up questions](#warm-up-questions)
   - [What is Prompt Engineering?](#what-is-prompt-engineering)
@@ -50,7 +55,7 @@ Figure 1 shows the prompting for the first question and the answer from GPT3.5.
-Figure 1: Prompting for arithmetic question</td>
+Figure 1: Prompting for arithmetic question
 
 For the second question, providing an example and an explanation behind the reasoning on how to reach the final answer helped GPT produce the correct answer. Here, the prompt included explicitly stating that the magic box can also convert from coins to apples.
@@ -154,11 +159,12 @@ The second paper[^2] shows that LLMs do not always say what they think, especial
 
 In conclusion, prompts can be controversial and not always perfect.
 
-TODO: find the actual papers, put in links and make the references more complete.
+
+
 
-[^1]: Zhao, Zihao, et al. "Calibrate before use: Improving few-shot performance of language models." International Conference on Machine Learning. PMLR, 2021.
+[^1]: Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh. "[Calibrate before use: Improving few-shot performance of language models](https://arxiv.org/pdf/2102.09690.pdf)." International Conference on Machine Learning. PMLR, 2021.
 
-[^2]: Turpin, Miles, et al. "Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting." arXiv preprint arXiv:2305.04388 (2023).
+[^2]: Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman. "[Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting.](https://arxiv.org/pdf/2305.04388.pdf)" arXiv preprint arXiv:2305.04388, 2023.
 
 # (Wednesday, 09/13/2023) Marked Personas
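
The context line in the patch above notes that, for the magic-box question, pairing an example with the reasoning behind its answer helped GPT-3.5 reach the correct result. As a rough illustration of that prompting pattern (not taken from the patched post or the commit), here is a minimal Python sketch; the exchange rate, the example numbers, and the use of the OpenAI Python client (openai>=1.0 with an OPENAI_API_KEY set) are all assumptions.

```python
# Illustrative sketch only -- not part of the patch or the original blog post.
# A few-shot prompt that pairs an example question with the reasoning behind
# its answer; the magic-box rate and numbers are invented for illustration.
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY is set

client = OpenAI()

few_shot_prompt = """\
A magic box turns 2 apples into 1 coin. The box can also convert coins back into apples.

Example question: If I put 4 apples into the box, how many coins do I get?
Example reasoning: Every 2 apples become 1 coin, so 4 apples become 4 / 2 = 2 coins.
Example answer: 2 coins.

Question: If I put 10 apples into the box, how many coins do I get?
Reasoning:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```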