Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
the LLM vulnerability scanner
An AI agent that conducts vulnerability tests on LLMs deployed on SAP AI Core, run locally, or pulled from Hugging Face. The goal of this project is to identify and correct potential security vulnerabilities.
Papers related to Large Language Models in all top venues
Framework for testing vulnerabilities of large language models (LLM).
A secure, low-code honeypot framework leveraging AI for system virtualization.
The fastest && easiest LLM security guardrails for CX AI Agents and applications.
Experimental tools for backdooring large language models by rewriting their system prompts at the raw parameter level. This can potentially enable offline remote code execution without running any actual code on the victim's machine, or thwart LLM-based fraud/moderation systems.
🐢 Open-Source Evaluation & Testing for AI & LLM systems
🏴‍☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
Personal Portfolio Website
Engineered to help red teams and penetration testers exploit vulnerabilities in large language model AI solutions.
Replication package of the paper 'Large Language Models for In-File Vulnerability Localization are "Lost in the End"' (https://doi.org/10.1145/3715758)
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳 Docker-friendly. ⚡ Always in sync with SharePoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
A benchmark for prompt injection detection systems.
Whispers in the Machine: Confidentiality in LLM-integrated Systems
An Execution Isolation Architecture for LLM-Based Agentic Systems
Comprehensive LLM AI model protection | Protect your production GenAI LLM applications | A cybersecurity toolset aligned with the OWASP Top 10 for Large Language Models - https://genai.owasp.org/llm-top-10/
The most comprehensive prompt hacking course available, documenting our progress on prompt engineering and prompt hacking.
This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses.