From c5f851ed479d101c1f01a7e1652b5f1be82f9c07 Mon Sep 17 00:00:00 2001
From: Sean Kwach Wasonga
Date: Wed, 3 Apr 2024 17:14:42 +0300
Subject: [PATCH] Create readme.md

---
 .../readme.md | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Customer Guides/Prompting Tips for Copilot For Security/readme.md

diff --git a/Customer Guides/Prompting Tips for Copilot For Security/readme.md b/Customer Guides/Prompting Tips for Copilot For Security/readme.md
new file mode 100644
index 00000000..70a2209d
--- /dev/null
+++ b/Customer Guides/Prompting Tips for Copilot For Security/readme.md
@@ -0,0 +1,33 @@
+![Security Copilot Logo](https://github.com/Azure/Copilot-For-Security/blob/main/Images/ic_fluent_copilot_64_64%402x.png)
+
+# Prompting Tips with Microsoft Copilot for Security
+
+| Best Practice | Description |
+|---------------|-------------|
+| **State your Goal** | What are you hoping to achieve out of your inquiry (prompt)? Make Copilot aware of your intended goal. |
+| **Provide Context & Set Expectations** | Be specific and provide context when asking questions, such as the name of the plugin, skill, or promptbook you’d like to use. Be mindful of the words you use in your prompts and how they may affect the chatbot’s response; words matter, and they can influence the selection of skills. Include modifiers such as format, audience, length, and confidence to guide the model and avoid ambiguity. |
+| **Provide the Source** | State the source of the information you seek in your prompt, especially if there are multiple sources that could provide similar information. |
| **Continue to Iterate** | Iterate and refine your prompts if you don't get the expected results. Use different words or phrases to ask the same question if the first attempt doesn't produce the desired result; the chatbot may recognize synonyms or alternative expressions better. |
+| **Be Positive and Respectful** | Use positive instructions instead of negative ones. Be respectful and appropriate for the workplace, and avoid any biased, inappropriate, or violent content. Use the singular "they" pronoun to refer to people and avoid guessing their gender, roles, or feelings. |
+| **Treat Copilot as your Compadre** | Address Copilot as "you" instead of "model" or "assistant". |
+| **Treat new Copilot sessions as if they are a Stranger** | Sessions currently cannot reference other sessions. This means that if you ask a question that relates to an answer provided in a previous session, your new session will likely struggle to address it without enough context. Please be mindful of this when working with Copilot. |
+| **Make Copilot do the Heavy Lifting when it comes to summarizing your sessions** | When concluding your session, leverage Copilot for Security to summarize the session’s responses for whatever audience you choose. |
+| **Bridge your KQL knowledge gap with NL2KQL skills and KQL custom plugins** | Use the natural language to KQL (NL2KQL) plugin to hunt for information in your data sources using plain English. Junior KQL users will appreciate our KQL custom plugins, which execute common KQL queries via natural language. |
+| **Automate wherever possible** | Use promptbooks and custom promptbooks to automate common security workflows that require iterative responses to collate and summarize the information you seek. |
+| **It's okay to experiment. In fact, it’s encouraged!** | We’re venturing into uncharted territory as a market with Security LLMs. 
Don’t be afraid to experiment with different ways of framing your prompts, and evaluate the results based on your needs and expectations. You may be pleasantly surprised to find new things you never thought possible. Share your learnings with us in the comments of our blog. |
+
+Following the best practices above, listed below are good and bad examples of prompts intended to support various security-related use cases.
+
+| Quality | Examples |
+|---------|----------|
+| **Bad** | Provide an incident summary. |
+| **Good** | Provide a summary for incident 19247 from Defender catered to a non-technical executive audience. |
+| **Better** | Provide a summary for incident 19247 from Defender catered to a non-technical executive audience. List the entities of the incident in a table providing context from MDTI. |
+| **Best** | Provide a summary for incident 19247 from Defender catered to a non-technical executive audience. List the entities of the incident in a table which includes the following headers: “Entity”, “Entity Type”, “MDTI Reputation”. Within Entity, please list the entity associated with the incident or the incident’s alerts. Within Entity Type, please list what type of entity it is, for example, domain name, IP address, URL, or hash. Within MDTI Reputation, please enrich the entity against MDTI’s Copilot reputation skill. |
+
+| Quality | Examples |
+|---------|----------|
+| **Bad** | Create a KQL query to hunt for hexadecimal strings. |
+| **Good** | Create a KQL query to hunt for hexadecimal strings associated with the svchost.exe process. |
+| **Better** | Prompt 1: Create a Defender KQL query to hunt for hexadecimal strings associated with the svchost.exe process.
Prompt 2: What threat actor groups tend to use this svchost.exe process?
Prompt 3: What are the TTPs associated with these threat actor groups?
Prompt 4: Please create a table to list the MITRE ATT&CK techniques associated with each threat actor group as unique rows. List each MITRE ATT&CK technique associated with each threat actor group in column 1 “MITRE ATT&CK technique” and which threat actor groups used that technique in column 2, “Threat Actor Group(s)”. | +| **Best** | Prompt 1: Create a Defender KQL query to hunt for hexadecimal strings associated with the svchost.exe process.
Prompt 2: What threat actor groups tend to use this svchost.exe process?
Prompt 3: What are the TTPs associated with these threat actor groups?
Prompt 4: Please create a table to list the MITRE ATT&CK techniques associated with each threat actor group as unique rows. List each MITRE ATT&CK technique associated with each threat actor group in column 1 “MITRE ATT&CK technique” and which threat actor groups used that technique in column 2, “Threat Actor Group(s)”.
Prompt 5: Based on the MITRE ATT&CK techniques gathered in the previous response, which ones do not have analytic (detection) rule coverage based on what our organization has configured in our Sentinel workspace?
Prompt 6: Which CVEs do these threat actor groups tend to exploit?
Prompt 7: What threat intelligence exists associated with each of these CVEs from MDTI?
Prompt 8: What threat intelligence exists associated with each of these CVEs from inthewild.io?
Prompt 9: What remediation and/or mitigation recommendations are associated with each of these CVEs from MDTI?
Prompt 10: Which of my MDEASM, MDVM, MDC, and IoT assets are vulnerable to these CVEs?
Prompt 11: Based on the incident comments (or wherever you document your postmortem steps), have these recommendations been followed?
\[Save this as a custom promptbook\] |
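
For reference, a query of the kind the “Good” prompt above asks for might look like the following sketch. This is illustrative only, not output from Copilot; it assumes the standard Microsoft Defender Advanced Hunting `DeviceProcessEvents` schema, and the `{8,}` minimum-length threshold is an arbitrary choice:

```kusto
// Hunt for long hexadecimal strings in svchost.exe command lines.
// Assumes the Advanced Hunting DeviceProcessEvents table and its standard columns.
DeviceProcessEvents
| where FileName =~ "svchost.exe"
| where ProcessCommandLine matches regex @"0x[0-9A-Fa-f]{8,}"
| project Timestamp, DeviceName, ProcessCommandLine, InitiatingProcessFileName
| take 100
```

Comparing whatever Copilot generates against a sketch like this is a quick way to sanity-check table names, filters, and output columns before running the query.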