Feature/content safety #3

Open · Sheepsta300 wants to merge 22 commits into master from feature/content_safety

Commits (22):
9a7d45a - work on first version of content safety tool (Sheepsta300, Sep 11, 2024)
e04b3c8 - lint file (Sheepsta300, Sep 11, 2024)
f3e6cfa - update init (Sheepsta300, Sep 11, 2024)
c18d000 - update class to use `__init__` to validate environment instead of `ro… (Sheepsta300, Sep 11, 2024)
e086054 - adhere to linting recommendations (Sheepsta300, Sep 11, 2024)
c1943e4 - change description to ensure model's give correct input (Sheepsta300, Sep 11, 2024)
fe863b3 - reformat file with ruff (Sheepsta300, Sep 11, 2024)
28fb0e7 - change return type of function (Sheepsta300, Sep 11, 2024)
61a815f - Update class to use new v3 `pydantic` validation methods (Sheepsta300, Oct 4, 2024)
2afbff5 - Add unit tests and required dependencies (Sheepsta300, Oct 8, 2024)
b1809ea - Add docs and lint files (Sheepsta300, Oct 8, 2024)
ef328e7 - Add missing headers to docs and update attributes in class (Sheepsta300, Oct 8, 2024)
e8b1415 - Merge branch 'langchain-ai:master' into feature/content_safety (Sheepsta300, Oct 8, 2024)
7bc9d2a - Add remaining missing headers according to CI (Sheepsta300, Oct 8, 2024)
a2c4582 - Merge branch 'feature/content_safety' of https://github.com/Sheepsta3… (Sheepsta300, Oct 8, 2024)
ac350e3 - Rearrange headers to try fix CI error (Sheepsta300, Oct 8, 2024)
3fb48a5 - Rearrange headers (Sheepsta300, Oct 8, 2024)
1f30d14 - Change Tool Functions to Tool functions (Sheepsta300, Oct 8, 2024)
e5fb363 - Change order of cells (Sheepsta300, Oct 8, 2024)
4915fe0 - Add outputs to docs (Sheepsta300, Oct 15, 2024)
71ae221 - Add suggested changes to guide and class code (Sheepsta300, Dec 10, 2024)
75bcf2a - Lint file (Sheepsta300, Dec 10, 2024)
@@ -0,0 +1,151 @@
from __future__ import annotations

import logging
import os
from typing import Any, Dict, Optional

from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.tools import BaseTool

logger = logging.getLogger(__name__)


class AzureContentSafetyTextTool(BaseTool):
"""
A tool that interacts with the Azure AI Content Safety API.

This tool queries the Azure AI Content Safety API to analyze text for harmful
content and identify sentiment. It requires an API key and endpoint,
kristapratico marked this conversation as resolved.
Show resolved Hide resolved
which can be set up as described in the following guide:

https://learn.microsoft.com/python/api/overview/azure/ai-contentsafety-readme?view=azure-python

Attributes:
content_safety_key (str):
The API key used to authenticate requests with Azure Content Safety API.
content_safety_endpoint (str):
The endpoint URL for the Azure Content Safety API.
content_safety_client (Any):
An instance of the Azure Content Safety Client used for making API
requests.

Methods:
_sentiment_analysis(text: str) -> Dict:
Analyzes the provided text to assess its sentiment and safety,
returning the analysis results.

_run(query: str,
run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
Uses the tool to analyze the given query and returns the result.
Raises a RuntimeError if an exception occurs.
kristapratico marked this conversation as resolved.
Show resolved Hide resolved
"""

    content_safety_key: str = ""  #: :meta private:
    content_safety_endpoint: str = ""  #: :meta private:
    content_safety_client: Any  #: :meta private:

    name: str = "azure_content_safety_tool"
    description: str = (
        "A wrapper around Azure AI Content Safety. "
        "Useful for when you need to identify whether text is harmful "
        "and which harm categories it falls under. "
        "Input must be text (str)."
    )

    def __init__(
        self,
        *,
        content_safety_key: Optional[str] = None,
        content_safety_endpoint: Optional[str] = None,
    ) -> None:
"""
Initialize the AzureContentSafetyTextTool with the given API key and endpoint.

If not provided, the API key and endpoint are fetched from environment
variables.

Args:
content_safety_key (Optional[str]):
The API key for Azure Content Safety API. If not provided, it will
be fetched from the environment variable 'CONTENT_SAFETY_API_KEY'.
content_safety_endpoint (Optional[str]):
The endpoint URL for Azure Content Safety API. If not provided, it
will be fetched from the environment variable
'CONTENT_SAFETY_ENDPOINT'.

Raises:
ImportError: If the 'azure-ai-contentsafety' package is not installed.
ValueError: If API key or endpoint is not provided and environment
variables are missing.
"""
        # Fall back to environment variables when explicit values are not given;
        # os.environ raises KeyError if a variable is missing.
        content_safety_key = content_safety_key or os.environ["CONTENT_SAFETY_API_KEY"]
        content_safety_endpoint = (
            content_safety_endpoint or os.environ["CONTENT_SAFETY_ENDPOINT"]
        )
        try:
            # Import lazily so azure-ai-contentsafety stays an optional dependency.
            import azure.ai.contentsafety as sdk
            from azure.core.credentials import AzureKeyCredential

            content_safety_client = sdk.ContentSafetyClient(
                endpoint=content_safety_endpoint,
                credential=AzureKeyCredential(content_safety_key),
            )

        except ImportError:
            raise ImportError(
                "azure-ai-contentsafety is not installed. "
                "Run `pip install azure-ai-contentsafety` to install."
            )
        super().__init__(
            content_safety_key=content_safety_key,
            content_safety_endpoint=content_safety_endpoint,
            content_safety_client=content_safety_client,
        )

    def _sentiment_analysis(self, text: str) -> Dict:
        """
        Analyze the provided text for harmful content.

        This method uses the Azure Content Safety Client to analyze the text
        and determine which harm categories it falls under.

        Args:
            text (str): The text to be analyzed.

        Returns:
            Dict: The per-category analysis results for the text.
        """
        from azure.ai.contentsafety.models import AnalyzeTextOptions

        request = AnalyzeTextOptions(text=text)
        response = self.content_safety_client.analyze_text(request)
        result = response.categories_analysis
        return result

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> Dict:
        """
        Analyze the given query using the tool.

        This method calls `_sentiment_analysis` to process the query and returns
        the result. It raises a RuntimeError if an exception occurs during
        analysis.

        Args:
            query (str): The query text to be analyzed.
            run_manager (Optional[CallbackManagerForToolRun], optional):
                A callback manager for tracking the tool run. Defaults to None.

        Returns:
            Dict: The result of the content safety analysis.

        Raises:
            RuntimeError: If an error occurs while running the tool.
        """
        try:
            return self._sentiment_analysis(query)
        except Exception as e:
            raise RuntimeError(
                f"Error while running AzureContentSafetyTextTool: {e}"
            ) from e
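
For illustration, here is a minimal usage sketch of the class added in this diff (not part of the PR itself). It assumes the CONTENT_SAFETY_API_KEY and CONTENT_SAFETY_ENDPOINT environment variables point at a provisioned Azure Content Safety resource and that azure-ai-contentsafety is installed:

# Hypothetical usage of the AzureContentSafetyTextTool defined above.
# Requires CONTENT_SAFETY_API_KEY and CONTENT_SAFETY_ENDPOINT to be set.
tool = AzureContentSafetyTextTool()

# BaseTool.invoke forwards the string to _run, which returns the
# per-category analysis (hate, self-harm, sexual, violence).
result = tool.invoke("Some text to screen for harmful content.")

for item in result:
    # Each item follows the azure-ai-contentsafety TextCategoriesAnalysis
    # model, exposing a category name and a severity score.
    print(item.category, item.severity)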