🐙 Team Agents unifica 82 especialistas en IA para resolver desafíos con chat inteligente, analista de requisitos y subida de documentos. Plataforma futurista y modular.
-
Updated
Dec 14, 2025 - Python
🐙 Team Agents unifica 82 especialistas en IA para resolver desafíos con chat inteligente, analista de requisitos y subida de documentos. Plataforma futurista y modular.
🔍 Analyze security vulnerabilities in CLI-based LLM deployments, drawing insights from 95 peer-reviewed sources to enhance AI tool safety.
🛡️ Explore tools for securing Large Language Models, uncovering their strengths and weaknesses in the realm of offensive and defensive security.
🔍 Discover vulnerabilities in LLMs with garak, a tool that probes for weaknesses like hallucination, data leakage, and misinformation effectively.
AI security co-pilot skill for Claude Code - identify, test, and fix vulnerabilities in LLM-powered applications
MER is a software that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering development of transparency and safety in AI. It also supports manipulation victims by detecting manipulative patterns in human communication.
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
The first open-source, customizable AI guardrails with user-defined scanners and custom model training support. It protects the entire AI inference pipeline—including prompts, models, agents, and outputs. Redefining runtime AI security for enterprise AI-powered applications.
Bidirectional LLM security firewall providing risk reduction (not complete protection) for human/LLM interfaces. Hexagonal architecture with multi-layer validation of inputs, outputs, memory and tool state. Beta status. ~528 KB wheel, optional ML guards.
🛡️ AI Firewall with 87 detection engines | Topological Data Analysis | Sheaf Theory | Post-Quantum | Protect LLMs from jailbreaks, injections & adversarial attacks
the LLM vulnerability scanner
Armorly is an open-source Chrome extension that blocks intrusive AI-native ads and sponsored content from all major chatbots (like ChatGPT, Grok, and Perplexity) and provides essential, in-browser hidden prompt injection security.
Security scanner for LLM/RAG applications - Test for prompt injection, jailbreaks, PII leakage, hallucinations & more
The Logic Firewall for AI Agents. Prevent infinite loops, token bombing, critical vulnerabilities and more before deployment.
An open-source knowledge base of defensive countermeasures to protect AI/ML systems. Features interactive views and maps defenses to known threats from frameworks like MITRE ATLAS, MAESTRO, and OWASP.
Multi-language security scanner with 64 analyzers + AI Agent Security. NEW: React2Shell CVE-2025-55182 detection (CVSS 10.0). Scan Python, JS, Go, Rust, Docker, Terraform, MCP & more. 11,500+ downloads. AGPL-3.0.
A Trustworthy and Secure Conversational Agent for Mental Healthcare
A.I.G (AI-Infra-Guard) is a comprehensive, intelligent, and easy-to-use AI Red Teaming platform developed by Tencent Zhuque Lab.
Out-Of-Tree Llama Stack Eval Provider for Red Teaming LLM Systems with Garak
Whispers in the Machine: Confidentiality in Agentic Systems
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."