A hands-on AI security workshop that hacks and protects AI agents using MCP servers, featuring real vulnerability demos and prompt injection defense.
-
Updated
Sep 9, 2025 - TypeScript
A hands-on AI security workshop that hacks and protects AI agents using MCP servers, featuring real vulnerability demos and prompt injection defense.
History Poison Lab: Vulnerable LLM implementation demonstrating Chat History Poisoning attacks. Learn how attackers manipulate chat context and explore mitigation strategies for secure LLM applications.
Add a description, image, and links to the vulnerability-testing topic page so that developers can more easily learn about it.
To associate your repository with the vulnerability-testing topic, visit your repo's landing page and select "manage topics."