AI-driven Threat modeling-as-a-Code (TaaC-AI)
-
Updated
Jun 29, 2025 - HTML
AI-driven Threat modeling-as-a-Code (TaaC-AI)
An open-source knowledge base of defensive countermeasures to protect AI/ML systems. Features interactive views and maps defenses to known threats from frameworks like MITRE ATLAS, MAESTRO, and OWASP.
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
MER is a software that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering development of transparency and safety in AI. It also supports manipulation victims by detecting manipulative patterns in human communication.
Personal Portfolio Website
🔍 Analyze security vulnerabilities in CLI-based LLM deployments, drawing insights from 95 peer-reviewed sources to enhance AI tool safety.
🛡️ AI Firewall with 87 detection engines | Topological Data Analysis | Sheaf Theory | Post-Quantum | Protect LLMs from jailbreaks, injections & adversarial attacks
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."