llm-security
Here are 8 public repositories matching this topic...
🔍 Analyze security vulnerabilities in CLI-based LLM deployments, drawing insights from 95 peer-reviewed sources to enhance AI tool safety.
Updated Dec 13, 2025 - HTML
Personal Portfolio Website
Updated Nov 26, 2025 - HTML
🛡️ AI Firewall with 87 detection engines | Topological Data Analysis | Sheaf Theory | Post-Quantum | Protects LLMs from jailbreaks, prompt injections, and adversarial attacks
Updated Dec 13, 2025 - HTML
MER is a software tool that identifies and highlights manipulative communication in text, covering both human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering the development of transparency and safety in AI. It also supports victims of manipulation by detecting manipulative patterns in human communication.
Updated Aug 3, 2024 - HTML
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Updated Jul 29, 2024 - HTML
An open-source knowledge base of defensive countermeasures to protect AI/ML systems. Features interactive views and maps defenses to known threats from frameworks like MITRE ATLAS, MAESTRO, and OWASP.
Updated Dec 12, 2025 - HTML
AI-driven Threat modeling-as-a-Code (TaaC-AI)
Updated Jun 29, 2025 - HTML