🛡️ Secure your LLM applications with PromptShields, a framework designed for real-time protection against prompt injection and data leaks.
-
Updated
Apr 11, 2026 - Python
🛡️ Secure your LLM applications with PromptShields, a framework designed for real-time protection against prompt injection and data leaks.
A full-stack AI Red Teaming platform securing AI ecosystems via OpenClaw Security Scan, Agent Scan, Skills Scan, MCP scan, AI Infra scan and LLM jailbreak evaluation.
GitHub Action to catch risky, expensive AI prompts before merge.
CLI for AI prompt testing, LLM cost optimization, prompt validation, and CI safety checks before production.
A CLI guardrail for AI coding assistants — checks developer prompts against your team's engineering guidelines before they reach Claude / Cursor / Copilot
Open-source prompt injection detector — 5 layers, 91.7% F1, ~27ms, offline, Apache 2.0
Share the skill. Keep the edge. 把能交的交出去,把真正值钱的判断留给自己。
Universal Prompt Security Standard (UPSS): A framework for externalizing, securing, and managing LLM prompts and genAI systems, inspired by and extending OWASP OPSS concepts for any organization or project.
Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
Nüwa (女娲): Self-evolving AI Agent Prompt Architect (自进化的AI智能体提示词架构师). Copy & Paste (复制即用). Generate custom agent prompts via XML (生成定制化提示词). Optimized for mainstream LLMs (适配主流大模型). Build your AI team (打造专属AI团队).
Static analysis CLI that scans codebases for LLM prompt-injection, data-exfiltration, jailbreak, and unsafe agent/tool vulnerabilities. Runs fully offline, integrates with CI/CD, and outputs console, JSON, and SARIF reports.
Live AI security demo: MCP tool abuse attacks vs Prompt Security defense, side-by-side in real time
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
AI governance and security orchestration layer that acts as a semantic firewall for enterprise LLM usage.
Single-context metacognitive security framework for LLM prompt injection defense
Secure two-stage RAG orchestration blueprint with deterministic policy routing and fallback controls.
Lint your AI coding sessions. Define rules, check compliance, get verdicts.
The fastest Trust Layer for AI Agents
LLM Penetration Testing Framework - Discover vulnerabilities in AI applications before attackers do. 100attacks + AI-powered adaptive mode.
Behavioral persona GPT modeled after a logical diagnostician. Engineered to audit user reasoning, minimize cognitive bias, and challenge assumptions with high-precision critique. (Inspired by the deductive reasoning of Dr. Gregory House).
Add a description, image, and links to the prompt-security topic page so that developers can more easily learn about it.
To associate your repository with the prompt-security topic, visit your repo's landing page and select "manage topics."