Add docs and lint files

Sheepsta300 committed Oct 8, 2024
1 parent 2afbff5 commit b1809ea
Showing 2 changed files with 144 additions and 1 deletion.
143 changes: 143 additions & 0 deletions docs/docs/integrations/tools/azure_content_safety.ipynb
@@ -0,0 +1,143 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `AzureContentSafetyTextTool`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">The `AzureContentSafetyTextTool` is a wrapper around the Azure AI Content Safety service and its API.\n",
">The tool detects harmful content in text according to Azure's content safety policies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example\n",
"\n",
"Get the required dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain import hub"
]
},
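{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook reads its credentials from environment variables. The cell below is a minimal setup sketch, assuming the same variable names used in the rest of this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"# Prompt for any credentials that are not already set in the\n",
"# environment. The variable names match the ones used below.\n",
"for var in [\n",
"    \"CONTENT_SAFETY_KEY\",\n",
"    \"CONTENT_SAFETY_ENDPOINT\",\n",
"    \"OPENAI_API_VERSION\",\n",
"    \"COMPLETIONS_MODEL\",\n",
"    \"AZURE_OPENAI_ENDPOINT\",\n",
"    \"AZURE_OPENAI_API_KEY\",\n",
"    \"LANGSMITH_KEY\",\n",
"]:\n",
"    if var not in os.environ:\n",
"        os.environ[var] = getpass.getpass(f\"{var}: \")"
]
},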
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use a prompt to tell the model what to do. LangChain Prompts can be configured, however for sake of simplicity we will use a premade prompt from LangSmith. This requires an API key which can be setup [here](https://www.langchain.com/langsmith) after registration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"LANGSMITH_KEY = os.environ[\"LANGSMITH_KEY\"]\n",
"prompt = hub.pull(\"hwchase17/structured-chat-agent\", api_key=LANGSMITH_KEY)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can use the `AzureContentSafetyTextTool` combine with a model, using `create_structured_chat_agent`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_structured_chat_agent\n",
"from langchain_community.tools.azure_ai_services.content_safety import (\n",
" AzureContentSafetyTextTool,\n",
")\n",
"from langchain_openai import AzureChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" AzureContentSafetyTextTool(\n",
" content_safety_key=os.environ[\"CONTENT_SAFETY_KEY\"],\n",
" content_safety_endpoint=os.environ[\"CONTENT_SAFETY_ENDPOINT\"],\n",
" )\n",
"]\n",
"\n",
"model = AzureChatOpenAI(\n",
" openai_api_version=os.environ[\"OPENAI_API_VERSION\"],\n",
" azure_deployment=os.environ[\"COMPLETIONS_MODEL\"],\n",
" azure_endpoint=os.environ[\"AZURE_OPENAI_ENDPOINT\"],\n",
" api_key=os.environ[\"AZURE_OPENAI_API_KEY\"],\n",
")\n",
"\n",
"agent = create_structured_chat_agent(model, tools, prompt)\n",
"\n",
"agent_executor = AgentExecutor(\n",
" agent=agent, tools=tools, verbose=True, handle_parsing_errors=True\n",
")"
]
},
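{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, the tool can also be invoked directly, without an agent. Like other LangChain tools it exposes `.invoke`, which takes the text to analyze and returns a string describing the detected harm levels (for example, `Harm: 0` when no harmful content is found)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Direct invocation of the Content Safety tool, bypassing the agent.\n",
"# The result is a string summarizing the harm severity detected in\n",
"# the input text, e.g. \"Harm: 0\" when nothing harmful is found.\n",
"tools[0].invoke(\"I hate you\")"
]
},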
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then by using `.invoke`, the model can be told what to do and assess if using the tools it was given would assist in it's response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input = \"I hate you\"\n",
"agent_executor.invoke(\n",
" {\"input\": f\"Can you check the following text for harmful content : {input}\"}\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -69,4 +69,4 @@ def test_no_harmful_content_detected(mocker: Any) -> None:
output = "Harm: 0\n"

result = tool._run(input)
assert result == output
assert result == output
