Agentic LLM Vulnerability Scanner / AI red teaming kit
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
The fastest and easiest LLM security guardrails for AI agents and applications.
LMAP (Large Language Model Mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer.
User prompt attack detection system
Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT
Example of running last_layer with FastAPI on Vercel
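A minimal sketch of what such an integration might look like. It assumes last_layer's documented scan_prompt() function, which returns a result with a passed flag; the /chat route, request model, and response shape below are illustrative and not taken from the linked example.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from last_layer import scan_prompt  # assumed API, per last_layer's docs

app = FastAPI()

class ChatRequest(BaseModel):
    prompt: str

@app.post("/chat")
async def chat(req: ChatRequest):
    # Scan the incoming prompt for injection/jailbreak markers before it
    # ever reaches the model; 'passed' is assumed to mean the prompt
    # cleared the checks.
    result = scan_prompt(req.prompt)
    if not result.passed:
        raise HTTPException(status_code=400, detail="Prompt blocked by guardrail")
    # ...forward req.prompt to the LLM backend here...
    return {"status": "ok"}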