Feature/content safety #3

Open · wants to merge 22 commits into langchain-ai:master from Sheepsta300:feature/content_safety
9a7d45a
work on first version of content safety tool
Sheepsta300 Sep 11, 2024
e04b3c8
lint file
Sheepsta300 Sep 11, 2024
f3e6cfa
update init
Sheepsta300 Sep 11, 2024
c18d000
update class to use `__init__` to validate environment instead of `ro…
Sheepsta300 Sep 11, 2024
e086054
adhere to linting recommendations
Sheepsta300 Sep 11, 2024
c1943e4
change description to ensure models give correct input
Sheepsta300 Sep 11, 2024
fe863b3
reformat file with ruff
Sheepsta300 Sep 11, 2024
28fb0e7
change return type of function
Sheepsta300 Sep 11, 2024
61a815f
Update class to use new v3 `pydantic` validation methods
Sheepsta300 Oct 4, 2024
2afbff5
Add unit tests and required dependencies
Sheepsta300 Oct 8, 2024
b1809ea
Add docs and lint files
Sheepsta300 Oct 8, 2024
ef328e7
Add missing headers to docs and update attributes in class
Sheepsta300 Oct 8, 2024
e8b1415
Merge branch 'langchain-ai:master' into feature/content_safety
Sheepsta300 Oct 8, 2024
7bc9d2a
Add remaining missing headers according to CI
Sheepsta300 Oct 8, 2024
a2c4582
Merge branch 'feature/content_safety' of https://github.com/Sheepsta3…
Sheepsta300 Oct 8, 2024
ac350e3
Rearrange headers to try fix CI error
Sheepsta300 Oct 8, 2024
3fb48a5
Rearrange headers
Sheepsta300 Oct 8, 2024
1f30d14
Change Tool Functions to Tool functions
Sheepsta300 Oct 8, 2024
e5fb363
Change order of cells
Sheepsta300 Oct 8, 2024
4915fe0
Add outputs to docs
Sheepsta300 Oct 15, 2024
71ae221
Add suggested changes to guide and class code
Sheepsta300 Dec 10, 2024
75bcf2a
Lint file
Sheepsta300 Dec 10, 2024
314 changes: 314 additions & 0 deletions docs/docs/integrations/tools/azure_content_safety.ipynb
@@ -0,0 +1,314 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `AzureContentSafetyTextTool`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integration details"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">The `AzureContentSafetyTextTool` acts as a wrapper around the Azure AI Content Safety Service/API.\n",
">The Tool will detect harmful content according to Azure's Content Safety Policy."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tool features"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This integration allows for the detection of harmful or offensive content in text using Azure's Content Safety API. It supports four categories of harmful content: Sexual, Harm, Self-Harm, and Violence."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This section provides details about how the Azure AI Content Safety integration works, including setup."
]
},
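{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tool lives in the `langchain-community` package and relies on the `azure-ai-contentsafety` SDK (added to the extended test dependencies in this PR). A minimal install sketch, assuming you also want `langchain-openai` and `langgraph` for the agent example below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community azure-ai-contentsafety langchain-openai langgraph"
]
},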
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can use the `AzureContentSafetyTextTool` combined with a model, using `create_react_agent`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import AzureChatOpenAI\n",
"\n",
"from libs.community.langchain_community.tools.azure_ai_services.content_safety import (\n",
" AzureContentSafetyTextTool,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Credentials"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Credentials can be set by being passed as parameters and should be stored locally as environment variables."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"CONTENT_SAFETY_ENDPOINT\"] = getpass.getpass()\n",
"os.environ[\"CONTENT_SAFETY_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"content_endpoint = os.environ[\"CONTENT_SAFETY_ENDPOINT\"]\n",
"content_key = os.environ[\"CONTENT_SAFETY_API_KEY\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Credentials can be passed directly, but they can also be retrieved automatically by the constructor if environment variables named `CONTENT_SAFETY_ENDPOINT` and `CONTENT_SAFETY_KEY` are set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"AZURE_OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"OPENAI_API_VERSION\"] = getpass.getpass()\n",
"os.environ[\"GPT_MODEL\"] = getpass.getpass()\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"model = AzureChatOpenAI(\n",
" openai_api_version=os.environ[\"OPENAI_API_VERSION\"],\n",
" azure_deployment=os.environ[\"GPT_MODEL\"],\n",
" azure_endpoint=os.environ[\"AZURE_OPENAI_ENDPOINT\"],\n",
" api_key=os.environ[\"AZURE_OPENAI_API_KEY\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"cs = AzureContentSafetyTextTool(\n",
" content_safety_key=content_key,\n",
" content_safety_endpoint=content_endpoint,\n",
")\n",
"\n",
"tools = [cs]"
]
},
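{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, because `CONTENT_SAFETY_ENDPOINT` and `CONTENT_SAFETY_KEY` were set above, the constructor should be able to pick the credentials up on its own. A sketch of the no-argument form, relying on the environment-variable fallback described earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Assumes CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY are set\n",
"cs_from_env = AzureContentSafetyTextTool()"
]
},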
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a react agent to invoke the tool. "
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"agent = create_react_agent(model, tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Input must be in the form of a string (`str`)."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"input = \"I hate you\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Invoke directly with args](/docs/concepts/#invoke-with-just-the-arguments)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Firstly, the tool can be used by directly passing input as an argument. However, this is discouraged as the tool is intended to be used in an executor chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cs.invoke({\"query\": input})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Invoke with ToolCall](/docs/concepts/#invoke-with-toolcall)"
]
},
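{
"cell_type": "markdown",
"metadata": {},
"source": [
"With a recent `langchain-core`, a tool can also be invoked with a `ToolCall`-shaped dict, in which case it returns a `ToolMessage`. A minimal sketch (the `id` value here is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tool_call = {\n",
"    \"name\": cs.name,\n",
"    \"args\": {\"query\": input_text},\n",
"    \"id\": \"1\",\n",
"    \"type\": \"tool_call\",\n",
"}\n",
"cs.invoke(tool_call)"
]
},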
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By using `.invoke`, the model can be told what to do and assess if using the tools it was given would assist in it's response. This is the intended use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent.invoke(\n",
" {\n",
" \"messages\": [\n",
" (\"user\", f\"Can you check the following text for harmful content : {input}\")\n",
" ]\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Azure AI Content Safety Overview](https://learn.microsoft.com/azure/ai-services/content-safety/overview) | [Azure AI Content Safety Python API](https://learn.microsoft.com/python/api/overview/azure/ai-contentsafety-readme?view=azure-python)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
1 change: 1 addition & 0 deletions libs/community/extended_testing_deps.txt
@@ -4,6 +4,7 @@ anthropic>=0.3.11,<0.4
arxiv>=1.4,<2
assemblyai>=0.17.0,<0.18
atlassian-python-api>=3.36.0,<4
azure-ai-contentsafety>=1.0.0
azure-ai-documentintelligence>=1.0.0b1,<2
azure-identity>=1.15.0,<2
azure-search-documents==11.4.0