---
created: 2024-11-04T21:25
modified: 2025-01-26 20:16
tags:
  - llm
  - large-language-model
  - prompt
  - prompting
  - nlp
  - natural-language-processing
  - prompt-engineering
type: note
status: in-progress
---

Although I've been generally disparaging of the field of "prompt engineering" (especially the inclusion of the word "engineering"), I actually really enjoyed and would recommend this short course.

This note is my own brief summary of the course content.

  • Guidelines
    • Principle 1: Write clear and specific instructions
      • Tactic 1: Use delimiters to clearly separate different parts of the input. example
      • Tactic 2: Request structured output from the LLM (e.g. HTML, JSON, XML). example
      • Tactic 3: Ask the model to explicitly check whether certain conditions are satisfied. example
      • Tactic 4: Few-shot prompting. Give the model a small number of (or even many) examples of what you want it to do. example
    • Principle 2: Give the model time to "think"
      • Tactic 1: Provide a list of explicit sequential steps for the model to follow. example
      • Tactic 2: Request output in a specified standard format. example
      • Tactic 3: Tell the model to perform the task itself first (and use this for comparison) rather than reviewing someone else's work directly. example
  • Iterative Prompt Development
    • Treat the creation of a prompt as you would the training of a machine learning model, i.e. iteratively improve the prompt, at each step comparing/evaluating the model output against a predefined set of test cases and/or metrics (see the sketch after this list)
    • Be structured about iterative development: keep a record of what was tried and what the outcome was
    • Possibly also consider a train/test split, so that the prompt is not overfit to a small, specific set of evaluation examples
  • Summarising
    • LLMs are amazing at interpreting natural language and manipulating text.
    • They can easily generate summaries of different types, or for different audiences, e.g. "Your task is to summarise the following text, including only details relevant to the pricing department". example
  • Inferring
    • LLMs are very good at classification tasks based on raw text input e.g. sentiment analysis, labelling, categorisation, topic extraction, tagging, information extraction etc. They can do this via prompting alone i.e. no separate training or fine-tuning steps are required. example
  • Transforming
    • LLMs are very good at mutating/reforming/translating text. Some example uses:
      • Identifying what language a piece of text is in
      • Translating between different languages (e.g. English -> Chinese)
      • Changing writing tone (e.g. informal to formal, verbose/academic to simple). example
      • Translating between different machine-readable formats, e.g. HTML -> Markdown or XML -> JSON
      • Spelling, grammar and style checking
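
As a rough illustration of the iterative workflow described above, here is a minimal sketch (my own, not code from the course) that runs a candidate prompt against a small set of test cases and records what was tried and how it did. The get_completion helper, the gpt-3.5-turbo model name and the toy test cases are all assumptions, and the openai (>= 1.0) Python client is assumed to be installed and configured with an API key.

# Minimal sketch (assumptions: openai>=1.0 client, API key in the environment,
# the model name, and the toy test cases below).
from openai import OpenAI

client = OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    # Single-turn chat completion; temperature 0 keeps output repeatable for evaluation.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Hypothetical test cases: an input text plus a simple automated check on the output.
test_cases = [
    {"text": "The battery lasts two days but the charger broke after a week.",
     "check": lambda out: "charger" in out.lower()},
    {"text": "Livraison rapide, produit conforme à la description.",
     "check": lambda out: len(out.split()) <= 40},
]

prompt_template = (
    "Summarize the review delimited by triple apostrophes in one sentence, in English.\n"
    "'''{text}'''"
)

# Iterate: run every test case, keep a record of the prompt version and the outcome,
# then tweak prompt_template and re-run, comparing against earlier results.
# With more cases, hold some out as a test set so the prompt isn't overfit
# to the examples used during iteration.
results = []
for case in test_cases:
    output = get_completion(prompt_template.format(text=case["text"]))
    results.append({"prompt": prompt_template, "input": case["text"],
                    "output": output, "passed": case["check"](output)})

print(sum(r["passed"] for r in results), "of", len(results), "test cases passed")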

Examples

Guidelines >> Principle 1 >> Tactic 1: Use delimiters to clearly separate different parts of the input

prompt = f"""
Summarize the text delimited by triple apostrophes into a single sentence.
'''{input_text}'''
"""

Guidelines >> Principle 1 >> Tactic 2 : Request structured output (e.g. HTML, JSON)

prompt = """
Generate a list of three made-up book titles along with their authors and genres. 
Provide them in JSON format with the following keys: 
book_id, title, author, genre.
"""

Guidelines >> Principle 1 >> Tactic 3: Ask the model to explicitly check whether certain conditions are satisfied

prompt = f"""
Translate the text contained within angle brackets <> into French.
<{input_text}>
If no text is provided, or if the input is not in English, then respond (in English) with "I am unable to process your request" and provide the reason that you cannot process it. 
"""

Guidelines >> Principle 1 >> Tactic 4: Few-Shot Prompting

prompt = """
Your task is to answer in a consistent style.

<child>: Teach me about patience.

<grandparent>: The river that carves the deepest valley flows from a modest spring; the grandest symphony originates from a single note; the most intricate tapestry begins with a solitary thread.

<child>: Teach me about resilience. 

<grandparent>: 
""".strip()

Guidelines >> Principle 2 >> Tactic 1: Provide a list of explicit sequential steps for the model to follow

prompt = f"""
Perform the following actions: 
1 - Summarize the following text delimited by triple apostrophes with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the following keys: french_summary, num_names.

Separate your answers with line breaks.

Text:
'''{text}'''

""".strip()

Guidelines >> Principle 2 >> Tactic 2: Request output in a specified standard format

prompt = f"""
Your task is to perform the following actions: 
1 - Summarize the following text delimited by <> with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the following keys: french_summary, num_names.

Use the following format:
Text: <text to summarize>
Summary: <summary>
Translation: <summary translation>
Names: <list of names in summary>
Output JSON: <json with summary and num_names>

Text: <{text}>

""".strip()

Guidelines >> Principle 2 >> Tactic 3: Tell the model to perform the task itself first (and use this for comparison) rather than reviewing someone else's work directly

prompt = f"""
Your task is to determine if the student's solution is correct or not.
To solve the problem do the following:
- First, work out your own solution to the problem including the final total. 
- Then compare your solution to the student's solution and evaluate if the student's solution is correct or not. 
Don't decide if the student's solution is correct until you have done the problem yourself.

Use the following output format:

Question:
~~~
original question here
~~~

Student's solution:
~~~
student's solution here
~~~

Actual solution:
~~~
steps to work out the solution and your solution here
~~~

Is the student's solution the same as the actual solution just calculated?:
~~~
yes or no
~~~

Student grade:
~~~
correct or incorrect
~~~

Here is the student's solution:
{students_solution_text}
""".strip()

Inferring

prompt = f"""
Identify the following items from the review text: 
- Sentiment (positive or negative)
- Is the reviewer expressing anger? (true or false)
- Item purchased by reviewer
- Company that made the item

The review is delimited with triple apostrophes. 
Format your response as a JSON object with "Sentiment", "Anger", "Item" and "Brand" as the keys.
If the information isn't present, use "unknown" as the value.
Make your response as short as possible.
Format the Anger value as a boolean.

Review text: '''{review_text}'''
""".strip()

References

Related