the LLM vulnerability scanner
-
Updated
Dec 12, 2025 - Python
the LLM vulnerability scanner
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
🐢 Open-Source Evaluation & Testing library for LLM Agents
A.I.G (AI-Infra-Guard) is a comprehensive, intelligent, and easy-to-use AI Red Teaming platform developed by Tencent Zhuque Lab.
The Security Toolkit for LLM Interactions
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
A security scanner for your LLM agentic workflows
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Papers and resources related to the security and privacy of LLMs 🤖
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
This repository provides a benchmark for prompt injection attacks and defenses
Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running any actual code on the victim's machine or thwart LLM-based fraud/moderation systems.
Framework for testing vulnerabilities of large language models (LLM).
The first open-source, customizable AI guardrails with user-defined scanners and custom model training support. It protects the entire AI inference pipeline—including prompts, models, agents, and outputs. Redefining runtime AI security for enterprise AI-powered applications.
The fastest Trust Layer for AI Agents
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
Framework for LLM evaluation, guardrails and security
An Execution Isolation Architecture for LLM-Based Agentic Systems
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."