diff --git a/README.md b/README.md index d3133f48..1ba55a00 100644 --- a/README.md +++ b/README.md @@ -21,10 +21,6 @@ The PDL interpreter (`pdl/pdl.py`) takes a PDL program as input and renders it i See below for installation notes, followed by an [overview](#overview) of the language. A more detailed description of the language features can be found in this [tutorial](https://ibm.github.io/prompt-declaration-language/tutorial). -## Demo Video - -https://github.com/user-attachments/assets/2629bf1e-bc54-4c45-b598-47914ab05a45 - ## Interpreter Installation @@ -56,7 +52,7 @@ To run the interpreter: pdl-local ``` -The folder `examples` contains many examples of PDL programs. Several of these examples have been adapted from the LMQL [paper](https://arxiv.org/abs/2212.06094) by Beurer-Kellner et al. The examples cover a variety of prompting patterns, see [prompt-library](https://github.com/IBM/prompt-declaration-language/blob/main/examples/prompt_library) for a library of ready-to-use prompting patterns. +The folder `examples` contains many examples of PDL programs. Several of these examples have been adapted from the LMQL [paper](https://arxiv.org/abs/2212.06094) by Beurer-Kellner et al. The examples cover a variety of prompting patterns such as CoT, RAG, ReAct, and tool use. We highly recommend using VSCode to edit PDL YAML files. This project has been configured so that every YAML file is associated with the PDL grammar JSONSchema (see [settings](https://github.com/IBM/prompt-declaration-language/blob/main/.vscode/settings.json) and [schema](https://github.com/IBM/prompt-declaration-language/blob/main/pdl-schema.json)). This enables the editor to display error messages when the YAML deviates from the PDL syntax and grammar. It also provides code completion. You can set up your own VSCode PDL projects similarly using these settings and schema files. The PDL interpreter also provides similar error messages.
diff --git a/docs/README.md b/docs/README.md index ba2dafa8..5092e11a 100644 --- a/docs/README.md +++ b/docs/README.md @@ -26,10 +26,6 @@ The PDL interpreter (`pdl/pdl.py`) takes a PDL program as input and renders it i See below for installation notes, followed by an [overview](#overview) of the language. A more detailed description of the language features can be found in this [tutorial](https://ibm.github.io/prompt-declaration-language/tutorial). -## Demo Video - - - ## Interpreter Installation @@ -61,7 +57,7 @@ To run the interpreter: pdl-local ``` -The folder `examples` contains many examples of PDL programs. Several of these examples have been adapted from the LMQL [paper](https://arxiv.org/abs/2212.06094) by Beurer-Kellner et al. The examples cover a variety of prompting patterns, see [prompt-library](https://github.com/IBM/prompt-declaration-language/blob/main/examples/prompt_library) for a library of ready-to-use prompting patterns. +The folder `examples` contains many examples of PDL programs. Several of these examples have been adapted from the LMQL [paper](https://arxiv.org/abs/2212.06094) by Beurer-Kellner et al. The examples cover a variety of prompting patterns such as CoT, RAG, ReAct, and tool use. We highly recommend using VSCode to edit PDL YAML files. This project has been configured so that every YAML file is associated with the PDL grammar JSONSchema (see [settings](https://github.com/IBM/prompt-declaration-language/blob/main/.vscode/settings.json) and [schema](https://github.com/IBM/prompt-declaration-language/blob/main/pdl-schema.json)). This enables the editor to display error messages when the YAML deviates from the PDL syntax and grammar. It also provides code completion. You can set up your own VSCode PDL projects similarly using these settings and schema files. The PDL interpreter also provides similar error messages.
diff --git a/docs/tutorial.md b/docs/tutorial.md index e54183c3..458fd15d 100644 --- a/docs/tutorial.md +++ b/docs/tutorial.md @@ -739,231 +739,6 @@ The interpreter prints out a log by default in the file `log.txt`. This log cont To change the log filename, you can pass it to the interpreter as follows: -## Prompt Library: ReAct, ReWOO, CoT, PoT - -Some of the most common prompt patterns/techniques have been implemented as PDL functions. A demo of the ReAct template: - - - -### Chain of Thought (Wei et al., 2022) - -The simplest pattern is CoT (Chain of Thought). An example for arithmetic reasoning: - -``` -text: - - include: examples/prompt_library/CoT.pdl - - call: fewshot_cot - args: - examples: - - question: |- - Noah charges $60 for a large painting and $30 for a small painting. - Last month he sold eight large paintings and four small paintings. - If he sold twice as much this month, how much is his sales for this month? - reasoning: |- - He sold 8 large paintings and 4 small paintings last month. - He sold twice as many this month. - 8 large paintings x $60 = << 8*60= 480 >> 480 - 4 small paintings x $30 = << 4*30= 120 >> 120 - So he sold << 480+120= 600 >> 600 paintings last month. - Therefore he sold << 600*2= 1200 >> this month. - answer: $1200 - - question: |- - Noah charges $30 for a large vases and $10 for a small vases. - Last month he sold five large vases and three small vases. - If he sold three times as much this month, how much is his sales for this month? - reasoning: |- - He sold 5 large vases and 3 small vases last month. - He sold three times as many this month. - 5 large vases x $30 = << 5*30= 150 >> 150 - 3 small vases x $10 = << 3*10= 30 >> 30 - So he sold << 150+30= 180 >> 180 vases last month. - Therefore he sold << 180*3= 540 >> this month. - answer: $540 - - |- - Question: Bobby gave Alice 5 apples. Alice has 6 apples. How many apples did she have before? - - Answer: Let's think step by step. 
- - model: "ibm/granite-34b-code-instruct" - platform: bam -``` - -This simple template constructs fewshot examples, which should be followed by the query/question and a model call. The output up to the model call (and thus the input to the model) would look as follows: -``` -Question: Noah charges $60 for a large painting and $30 for a small painting. -Last month he sold eight large paintings and four small paintings. -If he sold twice as much this month, how much is his sales for this month? - -Answer: Let's think step by step. He sold 8 large paintings and 4 small paintings last month. -He sold twice as many this month. -8 large paintings x $60 = << 8*60= 480 >> 480 -4 small paintings x $30 = << 4*30= 120 >> 120 -So he sold << 480+120= 600 >> 600 paintings last month. -Therefore he sold << 600*2= 1200 >> this month. -The answer is $1200. - -Question: Noah charges $30 for a large vases and $10 for a small vases. -Last month he sold five large vases and three small vases. -If he sold three times as much this month, how much is his sales for this month? - -Answer: Let's think step by step. He sold 5 large vases and 3 small vases last month. -He sold three times as many this month. -5 large vases x $30 = << 5*30= 150 >> 150 -3 small vases x $10 = << 3*10= 30 >> 30 -So he sold << 150+30= 180 >> 180 vases last month. -Therefore he sold << 180*3= 540 >> this month. -The answer is $540. - -Question: Bobby gave Alice 5 apples. Alice has 6 apples. How many apples did she have before? - -Answer: Let's think step by step. -``` - -### Program of Thought (Chen, 2022) - -The PoT (Program of Thought) template includes the static fewshot prompt from (Chen, 2022). Essentially, the model is prompted to generate Python code to solve its problem, which is then executed. - -``` -text: - - include: examples/prompt_library/PoT.pdl - - def: ANSWER - call: program_of_thought - args: - question: Ketty saves 20000 dollars to the bank. 
 After three years, the sum with compound interest rate is 1000 dollars more than the sum with simple interest rate. What is the interest rate of the bank? - model: ibm/granite-34b-code-instruct - - "\nAnswer: ${ ANSWER }" -``` - -### ReAct (Yao, 2023) - -The ReAct agent pattern is essentially a question, followed by a series of thoughts, actions, and observations, collectively called the trajectory. The input question is usually followed by a thought like `I need to search for x`. This is then followed by an action `Search[x]`, and the output of this tool call is the observation. Finally, the agent ends the trajectory with the `Finish[answer]` action. - -This pattern is provided by `examples/prompt_library/ReAct.pdl`. It describes the tools, renders their examples, renders any user-provided trajectories (e.g., multiple tool use), and handles the core loop until `Finish` is reached. - -The first building block is the `react_block` function. This function renders a trajectory, which consists of a list of single-item maps, into text. For example: - -``` -text: - - include: examples/prompt_library/ReAct.pdl - - call: react_block - args: - trajectory: - - question: "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?" - - thought: "I need to search Colorado orogeny, find the area that the eastern sector of the Colorado ..." - - action: "Search[Colorado orogeny]" - - observation: "The Colorado orogeny was an episode of mountain building (an orogeny) ..." - - thought: "High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft." - - action: "Finish[1,800 to 7,000 ft]" -``` - -Renders to: -``` -Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into? -Tho: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado ... 
-Act: Search[Colorado orogeny] -Obs: The Colorado orogeny was an episode of mountain building (an orogeny) ... -Tho: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft. -Act: Finish[1,800 to 7,000 ft] -``` - -To initiate a ReAct agent, the `react` function is used. For example: -``` -text: - - include: examples/prompt_library/ReAct.pdl - - call: react - args: - question: "When did the Battle of White Plains take place?" - model: meta-llama/llama-3-70b-instruct - tools: ${ default_tools } - trajectories: [] -``` - -The output of the `react` function is currently a JSON object with one key, `answer`, containing the final (`Finish[..]`) answer. - -The `default_tools` variable is provided by the ReAct include. **Critically**, it currently only offers `Search` and `get_current_weather`. In most cases, you will want to define your own tools. Both the tools and their metadata must be defined: the `react` function uses this information to describe the tools to the model, and to execute a tool call whenever the model's action matches an entry in the tool metadata. Tools are defined as follows: -``` -Search: - function: - subject: str - return: - - "[Document]\n" - - lan: python - code: | - import wikipedia - try: - result = wikipedia.summary(subject) - except wikipedia.WikipediaException as e: - result = str(e) - - "[End]\n" - -default_tools: - data: - - name: Search - description: Search Wikipedia for a summary - parameters: - - name: query - type: string - description: The topic of interest - examples: - - - question: "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?" - - thought: "I need to search Colorado orogeny, find the area that the eastern sector of the Colorado ..." - - action: "Search[Colorado orogeny]" - - observation: "The Colorado orogeny was an episode of mountain building (an orogeny) ..." 
- - thought: "High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft." - - action: "Finish[1,800 to 7,000 ft]" -``` - -The tool `name` is the most important field, as it must exactly match a defined PDL function. In this example, `Search` is defined right above the tool (metadata) definition. Note that all PDL tool functions in this template accept a single string parameter, which your function must split itself if multiple arguments are expected. Next, the tool must be described and its parameters defined; this is used to describe the expected input(s) to the model. The parameters are a _list_, and include types and a description. Finally, a list of example trajectories should be defined to show the model how to use the tool. These trajectories follow the `react_block` pattern described above. - -Finally, you can also add your own trajectories, for example to demonstrate the use of multiple tools in one trajectory: -``` -text: - - include: examples/prompt_library/ReAct.pdl - - call: react - args: - question: "When did the Battle of White Plains take place?" - model: meta-llama/llama-3-70b-instruct - tools: ${ default_tools } - trajectories: - - - question: "What is the minimum elevation for the area that the eastern sector of the Colorado orogeny extends into, in meters?" - - thought: "I need to search Colorado orogeny, find the area that the eastern sector of the Colorado ..." - - action: "Search[Colorado orogeny]" - - observation: "The Colorado orogeny was an episode of mountain building (an orogeny) ..." - - thought: "High Plains rise in elevation from around 1,800 to 7,000 ft, I need to convert this to meters." 
 - action: "Calculator[1,800*0.3048]" - - observation: "548.64" - - thought: "The answer is 548.64 meters" - - action: "Finish[548.64]" -``` - -### ReWOO (Xu, 2023) - -ReWOO (Reasoning without observation) is very similar to ReAct, but is faster and uses fewer tokens by having the model generate a trajectory where tool use can be _composed_ by variable reference. In practice, this means the model generates a trajectory in one generation, the PDL program parses this plan and executes tools as needed, and provides all the evidence (tool output) to the model in one request. This is in contrast to ReAct, where each step results in a whole new request to the model API. - -The ReWOO function shares many similarities with the ReAct function described above. An example with a trajectory showing the use of multiple tools (note that these tools are not all actually defined): - -``` -text: - - include: examples/prompt_library/ReWOO.pdl - - call: rewoo - args: - task: "When did the Battle of White Plains take place?" - model: ibm/granite-34b-code-instruct - tools: ${ default_tools } - trajectories: - - - question: Thomas, Toby, and Rebecca worked a total of 157 hours in one week. Thomas worked x hours. Toby worked 10 hours less than twice what Thomas worked, and Rebecca worked 8 hours less than Toby. How many hours did Rebecca work? - - thought: Given Thomas worked x hours, translate the problem into algebraic expressions and solve with Wolfram Alpha. - - action: WolframAlpha[Solve x + (2x - 10) + ((2x - 10) - 8) = 157] - - thought: Find out the number of hours Thomas worked. - - action: "LLM[What is x, given #E1]" - - thought: Calculate the number of hours Rebecca worked. - - action: "Calculator[(2 * #E2 - 10) - 8]" - show_plans: true -``` - - -The tool definitions are the same as for ReAct, and so are the trajectories. However, one difference is `show_plans`, which renders the parsed plans (e.g., the function calls), mostly as a debugging feature. 
- - ## Live Document Visualizer PDL has a Live Document visualizer to help in program understanding given an execution trace. diff --git a/examples/gsm8k/math-patterns.pdl b/examples/gsm8k/math-patterns.pdl deleted file mode 100644 index 9a98a4b3..00000000 --- a/examples/gsm8k/math-patterns.pdl +++ /dev/null @@ -1,32 +0,0 @@ -description: math problem -defs: - available_tools: - data: ["Search"] - prompt_pattern: "CoT" - -# try all examples and collect which ones _fail_ -# cluster by similarity, or classify e.g. 5 classes that don't work - -text: - - include: ../prompt_library/ReWOO.pdl - - include: ../prompt_library/ReAct.pdl - - include: ../prompt_library/CoT.pdl - - def: filtered_tools - call: filter_tools_by_name - contribute: [] - args: - tools: ${ default_tools } - tool_names: ${ available_tools } - - call: react - args: - question: "When did the Battle of White Plains take place?" - model: watsonx/meta-llama/llama-3-70b-instruct - tools: ${ filtered_tools } - - | - ${ demos }Question: ${ question } - Answer: Let's think step by step. - - model: ${ model } - def: PDL - parameters: - stop_sequences: ["<|endoftext|>"] - include_stop_sequence: false diff --git a/examples/prompt_library/CoT.pdl b/examples/prompt_library/CoT.pdl deleted file mode 100644 index 2022e673..00000000 --- a/examples/prompt_library/CoT.pdl +++ /dev/null @@ -1,77 +0,0 @@ -description: CoT pattern introduced by Wei et al. (2022) -defs: - # Chain of Thought - cot_block: - function: - question: str - reasoning: str - answer: str - return: |+ - Question: ${question} - - Answer: Let's think step by step. ${reasoning} - The answer is ${answer}. - - # Auto Chain of Thought Zhang et al. (2022) - # The idea is to use a _model_ to generate a reasoning path, even if not very accurate. - # It is best combined with some fewshot examples - auto_chain_of_thought: - function: - question: str - model: str - answer: str - return: - - |- - Question: ${question} - - Answer: Let's think step by step. 
- - model: ${ model } - parameters: - decoding_method: "greedy" - stop_sequences: - - "The answer is" - include_stop_sequence: false - - "The answer is ${ answer }." - - fewshot_cot: - function: - examples: - { list: { obj: { question: str, reasoning: str, answer: str } } } - return: - for: - example: ${ examples } - repeat: - call: cot_block - args: - question: ${ example.question } - reasoning: ${ example.reasoning } - answer: ${ example.answer } - - - chain_of_thought: - function: - question: str - model: str - examples: { list: { obj: { question: str, reasoning: str, answer: str } } } - return: - - call: fewshot_cot - args: - examples: ${ examples } - - |- - Question: ${question} - - Answer: Let's think step by step. - - model: ${ model } - parameters: - decoding_method: "greedy" - stop_sequences: - - "The answer is" - include_stop_sequence: false - - "The answer is " - - def: answer - model: ${ model } - parameters: - decoding_method: "greedy" - - "\n\nJSON Output: " - - data: - answer: ${ answer|trim } \ No newline at end of file diff --git a/examples/prompt_library/PoT.pdl b/examples/prompt_library/PoT.pdl deleted file mode 100644 index 91c71441..00000000 --- a/examples/prompt_library/PoT.pdl +++ /dev/null @@ -1,231 +0,0 @@ -description: Program of Thoughts pattern Chen (2022), TMLR -defs: - program_of_thought: - function: - question: str - model: str - return: - - | - Question: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? - # Python code, return ans - total_eggs = 16 - eaten_eggs = 3 - baked_eggs = 4 - sold_eggs = total_eggs - eaten_eggs - baked_eggs - dollars_per_egg = 2 - result = sold_eggs * dollars_per_egg - - Question: A robe takes 2 bolts of blue fiber and half that much white fiber. 
How many bolts in total does it take? - # Python code, return ans - bolts_of_blue_fiber = 2 - bolts_of_white_fiber = num_of_blue_fiber / 2 - result = bolts_of_blue_fiber + bolts_of_white_fiber - - Question: Josh decides to try flipping a house. He buys a house for $80,000 and then puts in $50,000 in repairs. This increased the value of the house by 150%. How much profit did he make? - # Python code, return ans - cost_of_original_house = 80000 - increase_rate = 150 / 100 - value_of_house = (1 + increase_rate) * cost_of_original_house - cost_of_repair = 50000 - result = value_of_house - cost_of_repair - cost_of_original_house - - Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens? - # Python code, return ans - numb_of_chickens = 20 - cups_for_each_chicken = 3 - cups_for_all_chicken = num_of_chickens * cups_for_each_chicken - cups_in_the_morning = 15 - cups_in_the_afternoon = 25 - result = cups_for_all_chicken - cups_in_the_morning - cups_in_the_afternoon - - Question: Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs only 60% of the price. Kylar wants to buy 16 glasses. How much does he need to pay for them? - # Python code, return ans - num_glasses = 16 - first_glass_cost = 5 - second_glass_cost = 5 * 0.6 - result = 0 - for i in range(num_glasses): - if i % 2 == 0: - result += first_glass_cost - else: - result += second_glass_cost - - Question: Marissa is hiking a 12-mile trail. She took 1 hour to walk the first 4 miles, then another hour to walk the next two miles. 
If she wants her average speed to be 4 miles per hour, what speed (in miles per hour) does she need to walk the remaining distance? - # Python code, return ans - average_mile_per_hour = 4 - total_trail_miles = 12 - remaining_miles = total_trail_miles - 4 - 2 - total_hours = total_trail_miles / average_mile_per_hour - remaining_hours = total_hours - 2 - result = remaining_miles / remaining_hours - - Question: Carlos is planting a lemon tree. The tree will cost $90 to plant. Each year it will grow 7 lemons, which he can sell for $1.5 each. It costs $3 a year to water and feed the tree. How many years will it tak - e before he starts earning money on the lemon tree? - # Python code, return ans - total_cost = 90 - cost_of_watering_and_feeding = 3 - cost_of_each_lemon = 1.5 - num_of_lemon_per_year = 7 - result = 0 - while total_cost > 0: - total_cost += cost_of_watering_and_feeding - total_cost -= num_of_lemon_per_year * cost_of_each_lemon - result += 1 - - Question: When Freda cooks canned tomatoes into sauce, they lose half their volume. Each 16 ounce can of tomatoes that she uses contains three tomatoes. Freda's last batch of tomato sauce made 32 ounces of sauce. How many tomatoes did Freda use? - # Python code, return ans - lose_rate = 0.5 - num_tomato_contained_in_per_ounce_sauce = 3 / 16 - ounce_sauce_in_last_batch = 32 - num_tomato_in_last_batch = ounce_sauce_in_last_batch * num_tomato_contained_in_per_ounce_sauce - result = num_tomato_in_last_batch / (1 - lose_rate) - - Question: Jordan wanted to surprise her mom with a homemade birthday cake. From reading the instructions, she knew it would take 20 minutes to make the cake batter and 30 minutes to bake the cake. The cake would require 2 hours to cool and an additional 10 minutes to frost the cake. If she plans to make the cake all on the same day, what is the latest time of day that Jordan can start making the cake to be ready to serve it at 5:00 pm? 
- # Python code, return ans - minutes_to_make_batter = 20 - minutes_to_bake_cake = 30 - minutes_to_cool_cake = 2 * 60 - minutes_to_frost_cake = 10 - total_minutes = minutes_to_make_batter + minutes_to_bake_cake + minutes_to_cool_cake + minutes_to_frost_cake - total_hours = total_minutes / 60 - result = 5 - total_hours - - Question: ${ question } - # Python code, return ans - - - def: PROGRAM - model: ${ model } - parameters: - stop_sequences: ["\nAnswer: "] - include_stop_sequence: false - - def: ANSWER - lan: python - contribute: [] - code: ${ PROGRAM } - - get: ANSWER - contribute: [] - - program_of_thought_backtick: - function: - question: str - model: str - return: - - | - Question: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? - # Python code, return ans - ```python - total_eggs = 16 - eaten_eggs = 3 - baked_eggs = 4 - sold_eggs = total_eggs - eaten_eggs - baked_eggs - dollars_per_egg = 2 - result = sold_eggs * dollars_per_egg - ``` - - Question: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? - # Python code, return ans - ```python - bolts_of_blue_fiber = 2 - bolts_of_white_fiber = num_of_blue_fiber / 2 - result = bolts_of_blue_fiber + bolts_of_white_fiber - ``` - - Question: Josh decides to try flipping a house. He buys a house for $80,000 and then puts in $50,000 in repairs. This increased the value of the house by 150%. How much profit did he make? 
- # Python code, return ans - ```python - cost_of_original_house = 80000 - increase_rate = 150 / 100 - value_of_house = (1 + increase_rate) * cost_of_original_house - cost_of_repair = 50000 - result = value_of_house - cost_of_repair - cost_of_original_house - ``` - - Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens? - # Python code, return ans - ```python - numb_of_chickens = 20 - cups_for_each_chicken = 3 - cups_for_all_chicken = num_of_chickens * cups_for_each_chicken - cups_in_the_morning = 15 - cups_in_the_afternoon = 25 - result = cups_for_all_chicken - cups_in_the_morning - cups_in_the_afternoon - ``` - - Question: Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs only 60% of the price. Kylar wants to buy 16 glasses. How much does he need to pay for them? - # Python code, return ans - ```python - num_glasses = 16 - first_glass_cost = 5 - second_glass_cost = 5 * 0.6 - result = 0 - for i in range(num_glasses): - if i % 2 == 0: - result += first_glass_cost - else: - result += second_glass_cost - ``` - - Question: Marissa is hiking a 12-mile trail. She took 1 hour to walk the first 4 miles, then another hour to walk the next two miles. If she wants her average speed to be 4 miles per hour, what speed (in miles per hour) does she need to walk the remaining distance? 
- # Python code, return ans - ```python - average_mile_per_hour = 4 - total_trail_miles = 12 - remaining_miles = total_trail_miles - 4 - 2 - total_hours = total_trail_miles / average_mile_per_hour - remaining_hours = total_hours - 2 - result = remaining_miles / remaining_hours - ``` - - Question: Carlos is planting a lemon tree. The tree will cost $90 to plant. Each year it will grow 7 lemons, which he can sell for $1.5 each. It costs $3 a year to water and feed the tree. How many years will it tak - e before he starts earning money on the lemon tree? - # Python code, return ans - ```python - total_cost = 90 - cost_of_watering_and_feeding = 3 - cost_of_each_lemon = 1.5 - num_of_lemon_per_year = 7 - result = 0 - while total_cost > 0: - total_cost += cost_of_watering_and_feeding - total_cost -= num_of_lemon_per_year * cost_of_each_lemon - result += 1 - ``` - - Question: When Freda cooks canned tomatoes into sauce, they lose half their volume. Each 16 ounce can of tomatoes that she uses contains three tomatoes. Freda's last batch of tomato sauce made 32 ounces of sauce. How many tomatoes did Freda use? - # Python code, return ans - ```python - lose_rate = 0.5 - num_tomato_contained_in_per_ounce_sauce = 3 / 16 - ounce_sauce_in_last_batch = 32 - num_tomato_in_last_batch = ounce_sauce_in_last_batch * num_tomato_contained_in_per_ounce_sauce - result = num_tomato_in_last_batch / (1 - lose_rate) - ``` - - Question: Jordan wanted to surprise her mom with a homemade birthday cake. From reading the instructions, she knew it would take 20 minutes to make the cake batter and 30 minutes to bake the cake. The cake would require 2 hours to cool and an additional 10 minutes to frost the cake. If she plans to make the cake all on the same day, what is the latest time of day that Jordan can start making the cake to be ready to serve it at 5:00 pm? 
- # Python code, return ans - ```python - minutes_to_make_batter = 20 - minutes_to_bake_cake = 30 - minutes_to_cool_cake = 2 * 60 - minutes_to_frost_cake = 10 - total_minutes = minutes_to_make_batter + minutes_to_bake_cake + minutes_to_cool_cake + minutes_to_frost_cake - total_hours = total_minutes / 60 - result = 5 - total_hours - ``` - - Question: ${ question } - # Python code, return ans - - def: PROGRAM - model: ${ model } - parser: - regex: '```.*\n((?:.|\n|$)*?)$\n\s*```' # extracts code from backtick blocks - mode: findall - parameters: - stop_sequences: ["\nAnswer: "] - include_stop_sequence: false - - def: ANSWER - lan: python - contribute: [] - code: ${ PROGRAM|join('\n') } - - get: ANSWER - contribute: [] \ No newline at end of file diff --git a/examples/prompt_library/RAG.pdl b/examples/prompt_library/RAG.pdl deleted file mode 100644 index 19c2acf9..00000000 --- a/examples/prompt_library/RAG.pdl +++ /dev/null @@ -1,27 +0,0 @@ -description: Retrieval-Augmented Generation (RAG) following Lewis et al. 
-defs: - # Corpus: Store the retrieval object in the PDL session - corpus: - function: - corpus: {list: str} - return: - - lan: python - contribute: [] - code: | - from rank_bm25 import BM25Okapi - PDL_SESSION.corpus = corpus - PDL_SESSION.tokenized_corpus = [doc.split(" ") for doc in corpus] - PDL_SESSION.bm25_corpus = BM25Okapi(PDL_SESSION.tokenized_corpus) - result = None - # Retrieve from corpus in PDL session - retrieve: - function: - query: str - num_examples: int - spec: {list: str} - return: - - lan: python - code: | - from rank_bm25 import BM25Okapi - tokenized_query = query.split(" ") - result = PDL_SESSION.bm25_corpus.get_top_n(tokenized_query, PDL_SESSION.corpus, n=num_examples) \ No newline at end of file diff --git a/examples/prompt_library/ReAct.pdl b/examples/prompt_library/ReAct.pdl deleted file mode 100644 index b9aebc94..00000000 --- a/examples/prompt_library/ReAct.pdl +++ /dev/null @@ -1,195 +0,0 @@ -description: ReAct pattern from Yao et al., [ICLR 2023](https://openreview.net/forum?id=WE_vluYUL-X) -# See alternative implementation here: https://smith.langchain.com/hub/hwchase17/react-chat -defs: - react_block: - function: - trajectory: { list: obj } - return: - - for: - trajectory: ${ trajectory } - repeat: - - defs: - type: ${ trajectory.keys()|first } - - if: ${ type == 'question'} - then: | - Question: ${ trajectory[type]|trim } - - if: ${ type == 'thought'} - then: | - Tho: ${ trajectory[type]|trim } - - if: ${ type == 'action'} - then: | - Act: ${ trajectory[type]|trim } - - if: ${ type == 'observation'} - then: | - Obs: ${ trajectory[type]|trim } - - if: ${ type not in ['question', 'thought', 'action', 'observation'] } - then: "${ type }: ${ trajectory[type]|trim }" - - "\n" - - demonstrate_tools: - function: - tools: { list: obj } - return: - for: - tool: ${ tools } - repeat: - for: - example: ${ tool.examples } - repeat: - call: react_block - args: - trajectory: ${ example } - - react: - function: - question: str - model: str - tools: 
{ list: obj } - trajectories: { list: list } - return: - - defs: - TOOL_INFO: - call: list_tools - args: - tools: ${ tools } - - "Available tools:\n" - - for: - name: ${ TOOL_INFO.names } - sig: ${ TOOL_INFO.signatures } - desc: ${ TOOL_INFO.descriptions } - repeat: | - ${ name }: ${ desc } - - "Finish: Respond with the Answer\n" - - "\n" - - call: demonstrate_tools - args: - tools: ${ tools } - - for: - traj: ${ trajectories } - repeat: - call: react_block - args: - trajectory: ${ traj } - - "Question: ${ question }\nTho:" - - defs: - temperature: 0.7 - repeat: - - repeat: - - def: THOUGHT - model: ${ model } - parameters: - decoding_method: sample - temperature: ${ temperature } - stop_sequences: ["\n", "Act:", "Obs:", "Tho:"] - include_stop_sequence: true - until: ${ THOUGHT.endswith('Act:') } - - def: action_raw - model: ${ model } - parameters: - decoding_method: sample - temperature: ${ temperature } - stop_sequences: ["[", "\n"] - include_stop_sequence: false - - defs: - ACTION: ${ action_raw|trim } - - "[" - - def: SUBJECT - model: ${ model } - parameters: - decoding_method: sample - temperature: ${ temperature } - stop_sequences: ["]", "\n"] - include_stop_sequence: false - - "]" - - if: ${ ACTION != 'Finish' } - then: - - "\nObs: " - - if: ${ ACTION in TOOL_INFO.names } - then: - - call: ${ ACTION } - args: - subject: ${ SUBJECT } - - model: ${ model } - parameters: - decoding_method: sample - temperature: ${ temperature } - stop_sequences: ["\n", "Act:", "Obs:", "Tho:"] - include_stop_sequence: false - else: "Invalid action. Valid actions are ${ TOOL_INFO.signatures|join(', ') } and Finish[]." 
- until: ${ ACTION == 'Finish' } - - "\n\nJSON Output: " - - data: - answer: ${ SUBJECT|trim } - - react_json: - function: - question: str - model: str - tools: { list: obj } - trajectories: { list: list } - return: - - defs: - TOOL_INFO: - call: list_tools - args: - tools: ${ tools } - - "Available tools:\n" - - for: - name: ${ TOOL_INFO.names } - sig: ${ TOOL_INFO.signatures } - desc: ${ TOOL_INFO.descriptions } - repeat: | - ${ name }: ${ desc } - - "Finish: Respond with the Answer\n" - - "\n" - - call: demonstrate_tools - args: - tools: ${ tools } - - for: - traj: ${ trajectories } - repeat: - call: react_block - args: - trajectory: ${ traj } - - "Question: ${ question }\nTho:" - - defs: - temperature: 0.05 - decoding_method: "greedy" - repeat: - - repeat: - - def: THOUGHT - model: ${ model } - parameters: - DECODING_METHOD: ${ decoding_method } - TEMPERATURE: ${ temperature } - STOP_SEQUENCES: ["\n", "Act:", "Obs:", "Tho:"] - INCLUDE_STOP_SEQUENCE: true - until: ${ THOUGHT.endswith('Act:') } - - def: action - model: ${ model } - parser: json - spec: {name: str, arguments: obj} - parameters: - DECODING_METHOD: ${ decoding_method } - TEMPERATURE: ${ temperature } - STOP_SEQUENCES: ["\n", "<|endoftext|>"] - INCLUDE_STOP_SEQUENCE: false - - if: ${ action.name != 'Finish' } - then: - - "\nObs: " - - if: ${ action.name in TOOL_INFO.names } - then: - - call: ${ action.name } - args: - arguments: ${ action.arguments } - - model: ${ model } - parameters: - DECODING_METHOD: ${ decoding_method } - TEMPERATURE: ${ temperature } - STOP_SEQUENCES: ["\n", "Act:", "Obs:", "Tho:"] - INCLUDE_STOP_SEQUENCE: false - else: "Invalid action. Valid actions are ${ TOOL_INFO.signatures|join(', ') } and Finish[]." 
- until: ${ action.name == 'Finish' } - - "\n\nJSON Output: " - - data: - answer: ${ action.arguments } diff --git a/examples/prompt_library/ReWoo.pdl b/examples/prompt_library/ReWoo.pdl deleted file mode 100644 index fbbb1adf..00000000 --- a/examples/prompt_library/ReWoo.pdl +++ /dev/null @@ -1,177 +0,0 @@ -description: ReWOO (Reasoning without observation) pattern from Xu et al., (http://arxiv.org/abs/2305.18323) -# Compared to ReAct, reduced token consumption (and thus execution time), -# by generating full chain of tools in a single pass -# see: https://github.com/langchain-ai/langgraph/blob/main/examples/rewoo/rewoo.ipynb -defs: - rewoo_block: - function: - trajectory: { list: obj } - return: - - def: i - contribute: [] - data: 1 - - for: - trajectory: ${ trajectory } - repeat: - - defs: - type: - text: ${ trajectory.keys()|first } - content: - text: ${ trajectory.values()|first } - - if: ${ type == 'question'} - then: | - Task: ${ content } - - if: ${ type == 'thought'} - then: |- - Plan: ${ content } - - if: ${ type == 'action'} - then: - - " #E${ i } = ${ content }\n" - - defs: - i: - data: ${ i+1 } - - if: ${ type == 'observation'} - then: "" - - if: ${ type not in ['question', 'thought', 'action', 'observation'] } - then: "${ type }: ${ content }" - - "\n" - - rewoo: - function: - task: str - model: str - tools: { list: obj } - trajectories: { list: list } - show_plans: bool - return: - - defs: - TOOL_INFO: - call: list_tools - args: - tools: ${ tools } - - | - For the following task, make plans that can solve the problem step by step. For each plan, indicate - which external tool together with tool input to retrieve evidence. You can store the evidence into a - variable #E that can be called by later tools. (Plan, #E1, Plan, #E2, Plan, ...) 
- - Tools can be one of the following: - - for: - i: ${ range(1, (tools|length)+1)|list } - name: ${ TOOL_INFO.names } - sig: ${ TOOL_INFO.signatures } - desc: ${ TOOL_INFO.descriptions } - repeat: | - (${i}) ${ sig }: ${ desc } - - "\n" - - for: - tool: ${ tools } - repeat: - for: - example: ${ tool.examples } - repeat: - call: rewoo_block - args: - trajectory: ${ example } - - for: - traj: ${ trajectories } - repeat: - call: rewoo_block - args: - trajectory: ${ traj } - - "\n" - - | - Begin! - Describe your plans with rich details. Each Plan should be followed by only one #E. - - Task: ${ task } - - def: PLANS - model: ${ model } - parser: # plan, step_name, tool, tool_input - regex: 'Plan:\s*(?P<plan>(?:.|\n)*?)\s*(?P<step_name>#E\d+)\s*=\s*(?P<tool>\w+)\s*\[(?P<tool_input>[^\]]+)\]' - mode: findall - parameters: - decoding_method: greedy - stop_sequences: - - "<|endoftext|>" - include_stop_sequence: false - - if: ${ show_plans } - then: - - "\n\n\u001b[34m--- Extracted Blueprint ---\n" - - for: - plan: ${ PLANS } - repeat: - - "\u001b[31mPlan: ${ plan[0] }\n" - - "\u001b[32m${ plan[1] } = ${ plan[2] }[${ plan[3] }]\n" - - "\n\u001b[37m" - - defs: - output: - data: {} - plans: - for: - plan: ${ PLANS } - repeat: - - defs: - PLAN: ${ plan[0] } - ID: ${ plan[1] } - ACTION: ${ plan[2] } - SUBJECT: ${ plan[3] } - SUBJECT_REPLACED: - lan: python - code: |- - for k,v in output.items(): - if k in SUBJECT: - SUBJECT = SUBJECT.replace(k, v) - result = SUBJECT - raw_tool_output: - if: ${ ACTION in TOOL_INFO.names } - then: - call: ${ ACTION } - args: - subject: ${ SUBJECT_REPLACED } - else: "Invalid action. Valid actions are ${ TOOL_INFO.signatures|join(', ') } and Finish[]."
- tool_output: ${ raw_tool_output } - - def: output - contribute: [] - lan: python - code: | - output[ID] = str(tool_output) - result = output - # - data: - # plan: ${ PLAN } - # key: ${ ID } - # value: ${ tool_output } - # subject: ${ SUBJECT } - # subject_replaced: ${ SUBJECT_REPLACED } - - | - Plan: ${ PLAN } - Evidence: ${ tool_output } - # - def: EVIDENCE - # contribute: [] - # text: - # for: - # plan: ${ plans } - # repeat: - # - | - # Plan: ${ plan.plan } - # Evidence: ${ plan.value } - - def: solution_input - text: |- - Solve the following task or problem. To solve the problem, we have made step-by-step Plan and retrieved corresponding Evidence to each Plan. Use them with caution since long evidence might contain irrelevant information. - - ${ plans|join } - Now solve the question or task according to provided Evidence above. Respond with the answer directly with no extra words. - - Task: ${ task } - Response: - - def: SOLUTION - model: ${ model } - parameters: - decoding_method: greedy - stop_sequences: - - "<|endoftext|>" - include_stop_sequence: false - input: - text: ${ solution_input } - - "\n\nJSON Output: " - - data: - answer: ${ SOLUTION } diff --git a/examples/prompt_library/demos/CoT.pdl b/examples/prompt_library/demos/CoT.pdl deleted file mode 100644 index 9307a7b1..00000000 --- a/examples/prompt_library/demos/CoT.pdl +++ /dev/null @@ -1,35 +0,0 @@ -description: Demo of CoT template -text: - - include: ../CoT.pdl - - call: fewshot_cot - args: - examples: - - question: |- - Noah charges $60 for a large painting and $30 for a small painting. - Last month he sold eight large paintings and four small paintings. - If he sold twice as much this month, how much is his sales for this month? - reasoning: |- - He sold 8 large paintings and 4 small paintings last month. - He sold twice as many this month. - 8 large paintings x $60 = << 8*60= 480 >> 480 - 4 small paintings x $30 = << 4*30= 120 >> 120 - So he sold << 480+120= 600 >> 600 paintings last month. 
- Therefore he sold << 600*2= 1200 >> this month. - answer: $1200 - - question: |- - Noah charges $30 for a large vases and $10 for a small vases. - Last month he sold five large vases and three small vases. - If he sold three times as much this month, how much is his sales for this month? - reasoning: |- - He sold 5 large vases and 3 small vases last month. - He sold three times as many this month. - 5 large vases x $30 = << 5*30= 150 >> 150 - 3 small vases x $10 = << 3*10= 30 >> 30 - So he sold << 150+30= 180 >> 180 vases last month. - Therefore he sold << 180*3= 540 >> this month. - answer: $540 - - call: auto_chain_of_thought - args: - question: "Noah has 2 apples. A friend gives him 3 more. How many apples does Noah have?" - model: "ibm/granite-34b-code-instruct" - answer: "5" diff --git a/examples/prompt_library/demos/PoT.pdl b/examples/prompt_library/demos/PoT.pdl deleted file mode 100644 index cace4d3b..00000000 --- a/examples/prompt_library/demos/PoT.pdl +++ /dev/null @@ -1,15 +0,0 @@ -description: Demo of PoT template -text: - - include: ../PoT.pdl - - def: ANSWER - call: program_of_thought - args: - question: Ketty saves 20000 dollars to the bank. After three years, the sum with compound interest rate is 1000 dollars more than the sum with simple interest rate. What is the interest rate of the bank? - model: ibm/granite-34b-code-instruct - - "\nAnswer: ${ ANSWER }" - - def: ANSWER - call: program_of_thought_backtick - args: - question: Ketty saves 20000 dollars to the bank. After three years, the sum with compound interest rate is 1000 dollars more than the sum with simple interest rate. What is the interest rate of the bank? Split your answer into two separate code blocks. 
- model: ibm/granite-34b-code-instruct - - "\nAnswer: ${ ANSWER }" \ No newline at end of file diff --git a/examples/prompt_library/demos/RAG.pdl b/examples/prompt_library/demos/RAG.pdl deleted file mode 100644 index 95c1c75d..00000000 --- a/examples/prompt_library/demos/RAG.pdl +++ /dev/null @@ -1,13 +0,0 @@ -description: Demo of RAG template -text: - - include: ../RAG.pdl - - call: corpus - args: - corpus: - - "Hello there good man!" - - "It is quite windy in London" - - "How is the weather today?" - - call: retrieve - args: - query: "windy London" - num_examples: 2 diff --git a/examples/prompt_library/demos/ReAct.pdl b/examples/prompt_library/demos/ReAct.pdl deleted file mode 100644 index d71e8b1f..00000000 --- a/examples/prompt_library/demos/ReAct.pdl +++ /dev/null @@ -1,19 +0,0 @@ -description: Demo of ReAct template -defs: - available_tools: - data: ["Search"] -text: - - include: ../tools.pdl - - include: ../ReAct.pdl - - def: filtered_tools - call: filter_tools_by_name - contribute: [] - args: - tools: ${ default_tools } - tool_names: ${ available_tools } - - call: react_json - args: - question: "When did the Battle of White Plains take place?" - model: meta-llama/llama-3-70b-instruct - tools: ${ filtered_tools } - trajectories: [] \ No newline at end of file diff --git a/examples/prompt_library/demos/ReWOO.pdl b/examples/prompt_library/demos/ReWOO.pdl deleted file mode 100644 index bf84d7f4..00000000 --- a/examples/prompt_library/demos/ReWOO.pdl +++ /dev/null @@ -1,26 +0,0 @@ -description: Demo of ReWOO template -defs: - available_tools: - data: ["Search"] -text: - - include: ../ReWOO.pdl - # - def: filtered_tools - # call: filter_tools_by_name - # contribute: [] - # args: - # tools: ${ default_tools } - # tool_names: ${ available_tools } - - call: rewoo - args: - task: "When did the Battle of White Plains take place?"
- model: ibm/granite-34b-code-instruct - tools: ${ default_tools } - trajectories: - - - question: Thomas, Toby, and Rebecca worked a total of 157 hours in one week. Thomas worked x hours. Toby worked 10 hours less than twice what Thomas worked, and Rebecca worked 8 hours less than Toby. How many hours did Rebecca work? - - thought: Given Thomas worked x hours, translate the problem into algebraic expressions and solve with Wolfram Alpha. - - action: WolframAlpha[Solve x + (2x - 10) + ((2x - 10) - 8) = 157] - - thought: Find out the number of hours Thomas worked. - - action: "LLM[What is x, given #E1]" - - thought: Calculate the number of hours Rebecca worked. - - action: "Calculator[(2 * #E2 - 10) - 8]" - show_plans: true diff --git a/examples/prompt_library/demos/Verifier.pdl b/examples/prompt_library/demos/Verifier.pdl deleted file mode 100644 index dd7a0073..00000000 --- a/examples/prompt_library/demos/Verifier.pdl +++ /dev/null @@ -1,40 +0,0 @@ -description: Demo of Verifier template -defs: - available_tools: - data: ["Search"] -text: - - include: ../ReAct.pdl - - include: ../../granite/granite_defs.pdl - - def: filtered_tools - call: filter_tools_by_name - contribute: [] - args: - tools: ${ default_tools } - tool_names: ${ available_tools } - - def: QUESTION - contribute: [] - read: - message: "Please enter a question: " - - def: GRANITE_RESULT - call: react - args: - context: ${ granite_models.granite_7b_lab.system_prompt } - question: ${ QUESTION } - model: "ibm/granite-7b-lab" - tools: ${ filtered_tools } - trajectories: [] - - "\n\n-------- Verifying answer --------\n\n" - - def: LLAMA_RESULT - call: react - args: - context: "" - question: |- - Is this the right answer to this question? - ${ QUESTION } - Proposed answer: ${ GRANITE_RESULT.answer } - - Please answer as True or False.
- model: "ibm-meta/llama-2-70b-chat-q" - tools: ${ filtered_tools } - trajectories: [] - - "\n\nThe answer '${ GRANITE_RESULT.answer }' has been verified as '${LLAMA_RESULT.answer}'." diff --git a/examples/prompt_library/demos/Verifier_json.pdl b/examples/prompt_library/demos/Verifier_json.pdl deleted file mode 100644 index 00b289dd..00000000 --- a/examples/prompt_library/demos/Verifier_json.pdl +++ /dev/null @@ -1,38 +0,0 @@ -text: -- include: ../tools.pdl -- include: ../ReAct.pdl -- include: ../../granite/granite_defs.pdl -- def: filtered_tools - call: filter_tools_by_name - contribute: [] - args: - tools: ${ default_tools } - tool_names: ["Search"] -- def: QUESTION - read: - message: "Please enter a question: " -- def: PROPOSED - call: react_json - args: - context: - - role: system - content: ${ granite_models.granite_7b_lab.system_prompt } - question: ${ QUESTION } - model: ibm/granite-7b-lab - tools: ${ filtered_tools } - trajectories: [] -- "\n\n----- Verifying answer... -----\n\n" -- def: VERIFIED - call: react_json - args: - context: [{"role": "system", "content": ""}] - question: |- - Is this the right answer to this question? - ${ QUESTION } - Proposed answer: ${ PROPOSED.answer.topic } - - Please answer as True or False. - model: ibm/granite-34b-code-instruct - tools: ${ filtered_tools } - trajectories: [] -- "\n\nThe answer '${ PROPOSED.answer.topic }' has been verified as '${VERIFIED.answer.topic}'.\n" diff --git a/examples/prompt_library/demos/gsm8k/CoT.pdl b/examples/prompt_library/demos/gsm8k/CoT.pdl deleted file mode 100644 index ca783b51..00000000 --- a/examples/prompt_library/demos/gsm8k/CoT.pdl +++ /dev/null @@ -1,32 +0,0 @@ -description: Demo of CoT template -text: - - include: ../CoT.pdl - - call: chain_of_thought - args: - examples: - - question: |- - Noah charges $60 for a large painting and $30 for a small painting. - Last month he sold eight large paintings and four small paintings. 
- If he sold twice as much this month, how much is his sales for this month? - reasoning: |- - He sold 8 large paintings and 4 small paintings last month. - He sold twice as many this month. - 8 large paintings x $60 = << 8*60= 480 >> 480 - 4 small paintings x $30 = << 4*30= 120 >> 120 - So he sold << 480+120= 600 >> 600 paintings last month. - Therefore he sold << 600*2= 1200 >> this month. - answer: $1200 - - question: |- - Noah charges $30 for a large vases and $10 for a small vases. - Last month he sold five large vases and three small vases. - If he sold three times as much this month, how much is his sales for this month? - reasoning: |- - He sold 5 large vases and 3 small vases last month. - He sold three times as many this month. - 5 large vases x $30 = << 5*30= 150 >> 150 - 3 small vases x $10 = << 3*10= 30 >> 30 - So he sold << 150+30= 180 >> 180 vases last month. - Therefore he sold << 180*3= 540 >> this month. - answer: $540 - question: "Jake earns thrice what Jacob does. If Jacob earns $6 per hour, how much does Jake earn in 5 days working 8 hours a day?" - model: "meta-llama/llama-3-70b-instruct" diff --git a/examples/prompt_library/demos/gsm8k/ReAct.pdl b/examples/prompt_library/demos/gsm8k/ReAct.pdl deleted file mode 100644 index 8eac65a5..00000000 --- a/examples/prompt_library/demos/gsm8k/ReAct.pdl +++ /dev/null @@ -1,35 +0,0 @@ -description: Demo of ReAct template -defs: - math_tools: - data: - - name: Calculator - description: Evaluates expressions using Python - parameters: - - name: expression - type: string - description: The mathematical expression to evaluate with a Python interpreter. - examples: - - - question: |- - Noah charges $60 for a large painting and $30 for a small painting. - Last month he sold eight large paintings and four small paintings. - If he sold twice as much this month, how much is his sales for this month? - - thought: |- - He sold 8 large paintings and 4 small paintings last month. - He sold twice as many this month. 
I need to calculate (8 large paintings x $60 + 4 small paintings x $30) - - action: Calculator[8*60+4*30] - - observation: 600 - - thought: |- - So he sold 600 paintings last month. He sold twice as many this month, therefore I need to calculate 600*2. - - action: Calculator[600*2] - - observation: 1200 - - thought: He sold $1200 this month. - - action: Finish[$1200] -text: - - include: ../tools.pdl - - include: ../ReAct.pdl - - call: react - args: - question: "Jake earns thrice what Jacob does. If Jacob earns $6 per hour, how much does Jake earn in 5 days working 8 hours a day?" - model: "meta-llama/llama-3-70b-instruct" - tools: ${ math_tools } - trajectories: [] \ No newline at end of file diff --git a/examples/prompt_library/demos/gsm8k/ReWoo.pdl b/examples/prompt_library/demos/gsm8k/ReWoo.pdl deleted file mode 100644 index b6726219..00000000 --- a/examples/prompt_library/demos/gsm8k/ReWoo.pdl +++ /dev/null @@ -1,36 +0,0 @@ -description: Demo of ReWOO template -defs: - math_tools: - data: - - name: Calculator - description: Evaluates expressions using Python - parameters: - - name: expression - type: string - description: The mathematical expression to evaluate with a Python interpreter. - examples: - - - question: |- - Noah charges $60 for a large painting and $30 for a small painting. - Last month he sold eight large paintings and four small paintings. - If he sold twice as much this month, how much is his sales for this month? - - thought: |- - He sold 8 large paintings and 4 small paintings last month. - He sold twice as many this month. I need to calculate (8 large paintings x $60 + 4 small paintings x $30) - - action: Calculator[8*60+4*30] - - observation: 600 - - thought: |- - So he sold 600 paintings last month. He sold twice as many this month, therefore I need to calculate #E1*2. - - action: Calculator[#E1*2] - - observation: 1200 - # - thought: "He sold #E2 this month."
- # - action: "Finish[#E2]" -text: - - include: examples/prompt_library/tools.pdl - - include: examples/prompt_library/ReWoo.pdl - - call: rewoo - args: - task: "Jake earns thrice what Jacob does. If Jacob earns $6 per hour, how much does Jake earn in 5 days working 8 hours a day?" - model: meta-llama/llama-3-70b-instruct - tools: ${ math_tools } - trajectories: [] - show_plans: true \ No newline at end of file diff --git a/examples/prompt_library/tools.pdl b/examples/prompt_library/tools.pdl deleted file mode 100644 index 127c09f5..00000000 --- a/examples/prompt_library/tools.pdl +++ /dev/null @@ -1,215 +0,0 @@ -description: Toolbox of PDL functions for agents -defs: - # Note: Although PDL functions can be properly typed, - # the input to a function via the LLM is fundamentally a string. - # Therefore, parsing the input is the responsibility of the - # function, not the caller. In the future, one could - # imagine the use of constrained decoding to force - # LLM to produce a type-compliant JSON as input. 
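The note above explains that, even though PDL functions can be typed, arguments arriving from an LLM are fundamentally strings, so each tool must parse its own input. As a minimal sketch of that responsibility (plain Python, outside PDL; `parse_action` is an illustrative helper, not part of the library), a bracketed `Tool[argument]` action string can be split like this:

```python
import re

# ReAct-style actions look like: Search[Colorado orogeny] or Calculator[8*60+4*30]
ACTION_RE = re.compile(r"^(?P<tool>\w+)\[(?P<arg>[^\]]*)\]$")

def parse_action(text: str):
    """Split 'Tool[argument]' into (tool, argument); return None if malformed."""
    m = ACTION_RE.match(text.strip())
    if m is None:
        return None
    return m.group("tool"), m.group("arg")

print(parse_action("Search[Colorado orogeny]"))  # ('Search', 'Colorado orogeny')
print(parse_action("not an action"))             # None
```

Constrained or JSON-mode decoding, as the comment suggests, would remove the need for this kind of ad-hoc string parsing.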
- - wrap_document: - data: true - Search_old: - function: - subject: str - return: - - defs: - result: - lan: python - code: | - import warnings, wikipedia - warnings.simplefilter("ignore") - try: - result = wikipedia.summary(subject) - except wikipedia.WikipediaException as e: - result = str(e) - - if: ${ wrap_document } - then: "[Document]\n${ result }\n[End]" - else: ${ result } - - Search: - function: - arguments: obj - return: - - defs: - result: - lan: python - code: | - import warnings, wikipedia - warnings.simplefilter("ignore") - - def main(topic: str, *args, **kwargs) -> str: - try: - return wikipedia.summary(topic) - except wikipedia.WikipediaException as e: - return str(e) - - result = main(**arguments) - - if: ${ wrap_document } - then: "[Document]\n${ result }\n[End]" - else: ${ result } - - default_model: "ibm/granite-34b-code-instruct" - LLM: - function: - subject: str - return: - model: ${ default_model } - parameters: - stop_sequences: - - "<|endoftext|>" - include_stop_sequence: false - decoding_method: greedy - - Calculator: - function: - subject: str - return: - lan: python - code: | - import math - result = ${ subject } - - get_current_weather: - function: - subject: str - return: - - lan: python - contribute: [] - code: | - import requests - response = requests.get('https://api.weatherapi.com/v1/current.json?key=cf601276764642cb96224947230712&q=${ subject }') - result = response.content - - default_tools: - data: - - name: get_current_weather - description: Get the current weather - parameters: - - name: location - type: string - description: The city and state, e.g. San Francisco, CA - examples: - - - question: "What is the weather in London?"
- - action: "get_current_weather[London]" - - observation: | - {"location":{"name":"London","region":"City of London, Greater London","country":"United Kingdom","lat":51.52,"lon":-0.11,"tz_id":"Europe/London","localtime_epoch":1722262564,"localtime":"2024-07-29 15:16"},"current":{"last_updated_epoch":1722262500,"last_updated":"2024-07-29 15:15","temp_c":27.9,"temp_f":82.2,"is_day":1,"condition":{"text":"Sunny","icon":"//cdn.weatherapi.com/weather/64x64/day/113.png","code":1000},"wind_mph":8.1,"wind_kph":13.0,"wind_degree":133,"wind_dir":"SE","pressure_mb":1019.0,"pressure_in":30.09,"precip_mm":0.0,"precip_in":0.0,"humidity":33,"cloud":6,"feelslike_c":27.2,"feelslike_f":80.9,"windchill_c":27.9,"windchill_f":82.2,"heatindex_c":27.2,"heatindex_f":80.9,"dewpoint_c":10.3,"dewpoint_f":50.5,"vis_km":10.0,"vis_miles":6.0,"uv":7.0,"gust_mph":9.3,"gust_kph":14.9}} - - action: "Finish[The weather in London is 82.2f and sunny.]" - - name: LLM - description: Call another LLM - parameters: - - name: query - type: string - description: The prompt - examples: - - name: Calculator - description: Run a calculator - parameters: - - name: query - type: string - description: The equation - examples: - - name: Wikipedia - description: Search Wikipedia for a summary - parameters: - - name: query - type: string - description: The topic of interest - examples: - - - question: "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?" - - thought: "I need to search Colorado orogeny, find the area that the eastern sector of the Colorado ..." - - action: "Search[Colorado orogeny]" - - observation: "The Colorado orogeny was an episode of mountain building (an orogeny) ..." - - thought: "High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft." - - action: "Finish[1,800 to 7,000 ft]" - - - question: "What profession does Nicholas Ray and Elia Kazan have in common?" 
- - thought: "I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common." - - action: "Search[Nicholas Ray]" - - observation: "Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause." - - thought: "Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions." - - action: "Search[Elia Kazan]" - - observation: "Elia Kazan was an American film and theatre director, producer, screenwriter and actor." - - thought: "Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor." - - action: "Finish[director, screenwriter, actor]" - - name: Search - description: Search Wikipedia for a summary - parameters: - - name: topic - type: string - description: The topic of interest - examples: - - - question: "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?" - - thought: "I need to search Colorado orogeny, find the area that the eastern sector of the Colorado ..." - - action: | - {"name": "Search", "arguments": {"topic": "Colorado orogeny"}} - - observation: "The Colorado orogeny was an episode of mountain building (an orogeny) ..." - - thought: "It does not mention the eastern sector. So I need to look up eastern sector." - - thought: "High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft." - - action: | - {"name": "Finish", "arguments": {"topic": "1,800 to 7,000 ft"}} - - - question: "What profession does Nicholas Ray and Elia Kazan have in common?" - - thought: "I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common." 
- - action: | - {"name": "Search", "arguments": {"topic": "Nicholas Ray"}} - - observation: "Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause." - - thought: "Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions." - - action: | - {"name": "Search", "arguments": {"topic": "Elia Kazan"}} - - observation: "Elia Kazan was an American film and theatre director, producer, screenwriter and actor." - - thought: "Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor." - - action: | - {"name": "Finish", "arguments": {"topic": "director, screenwriter, actor"}} - - name: Search_old - description: Search Wikipedia for a summary - parameters: - - name: query - type: string - description: The topic of interest - examples: - - - question: "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?" - - thought: "I need to search Colorado orogeny, find the area that the eastern sector of the Colorado ..." - - action: "Search[Colorado orogeny]" - - observation: "The Colorado orogeny was an episode of mountain building (an orogeny) ..." - - thought: "High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft." - - action: "Finish[1,800 to 7,000 ft]" - - - question: "What profession does Nicholas Ray and Elia Kazan have in common?" - - thought: "I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common." 
- - action: "Search[Nicholas Ray]" - - observation: "Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause." - - thought: "Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions." - - action: "Search[Elia Kazan]" - - observation: "Elia Kazan was an American film and theatre director, producer, screenwriter and actor." - - thought: "Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor." - - action: "Finish[director, screenwriter, actor]" - - filter_tools_by_name: - function: - tools: { list: obj } - tool_names: { list: str } - return: - data: ${ tools|selectattr('name', 'in', tool_names)|list } - - list_tools: - function: - tools: { list: obj } - return: - - defs: - signatures: - for: - tool: ${ tools } - repeat: "${ tool.name }[<${ tool.parameters|join('>, <', attribute='name') }>]" - typed_signatures: - for: - tool: ${ tools } - repeat: - - defs: - parameters: - for: - param: ${ tool.parameters } - repeat: "${ param.name }: ${ param.type }" - - "${ tool.name }(${ parameters|join(', ') })" - - data: - names: ${ tools|map(attribute='name')|list } - signatures: ${ signatures } - typed_signatures: ${ typed_signatures } - descriptions: ${ tools|map(attribute='description')|list }
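The `list_tools` function above renders each tool's metadata into the name lists and signatures that the ReAct and ReWOO prompts interpolate. A minimal Python sketch of the same rendering, outside PDL (field names mirror the YAML above; this is illustrative, not part of the interpreter):

```python
def list_tools(tools):
    """Build the same summary dict that the PDL list_tools function produces."""
    names, signatures, typed_signatures, descriptions = [], [], [], []
    for tool in tools:
        params = tool["parameters"]
        names.append(tool["name"])
        descriptions.append(tool["description"])
        # Bracketed form used in ReAct actions, e.g. Search[<topic>]
        placeholders = ">, <".join(p["name"] for p in params)
        signatures.append(f"{tool['name']}[<{placeholders}>]")
        # Typed form, e.g. Search(topic: string)
        typed = ", ".join(f"{p['name']}: {p['type']}" for p in params)
        typed_signatures.append(f"{tool['name']}({typed})")
    return {"names": names, "signatures": signatures,
            "typed_signatures": typed_signatures, "descriptions": descriptions}

tools = [{"name": "Search", "description": "Search Wikipedia for a summary",
          "parameters": [{"name": "topic", "type": "string"}]}]
info = list_tools(tools)
print(info["signatures"])        # ['Search[<topic>]']
print(info["typed_signatures"])  # ['Search(topic: string)']
```

Keeping this rendering in one place is what lets `react`, `react_json`, and `rewoo` share a single tool registry.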