Security scanner for code and logs of AI-powered applications
-
Updated
Jul 14, 2025 - Python
Security scanner for code and logs of AI-powered applications
Test your LLM system prompts against 287 real-world attack vectors including prompt injection, jailbreaks, and data leaks.
Comprehensive LLM AI Model protection - cybersecurity toolset aligned to addressing OWASP vulnerabilities - https://genai.owasp.org/llm-top-10/
Neovim integration for CodeGate
Stop prompt injections in 20ms. The safety toolkit every LLM app needs. No API keys, no complex setup, just `pip install llm-guard` and you're protected.
Professional AI Security Assurance portfolio demonstrating model supply-chain security, LLM red teaming, static analysis, SBOM validation, risk classification, and governance-aligned AI safety workflows.
Sovereign AI Infrastructure for Enterprise RAG. Zero-Trust PII Sanitization, Local Inference (CPU-optimized), and Docker-ready architecture.
Trained Without My Consent (TraWiC): Detecting Code Inclusion In Language Models Trained on Code
Simulating prompt injection and guardrail bypass across chained LLMs in security decision pipelines.
Security scanner for LLM/RAG applications - Test for prompt injection, jailbreaks, PII leakage, hallucinations & more
Evaluation of Google's Instruction Tuned Gemma-2B, an open-source Large Language Model (LLM). Aimed at understanding the breadth of the model's knowledge, its reasoning capabilities, and adherence to ethical guardrails, this project presents a systematic assessment across a diverse array of domains.
PromptSniffer is a security auditing tool designed for authorized penetration testing and corporate DLP monitoring. It captures and monitors prompts sent to Large Language Models (ChatGPT, Claude, Gemini, etc.) across your entire network, providing real-time email alerts and comprehensive logging.
Prompt Injection Detection in LLaMA-based Chatbots using LLM Guard
A living map of the AI agent security ecosystem.
Proof of Concept (PoC) demonstrating prompt injection vulnerability in AI code assistants (like Copilot) using hidden Unicode characters within instruction files (copilot-instructions.md). Highlights risks of using untrusted instruction templates. For educational/research purposes only.
Open-source enforcement layer for LLM safety and governance — ingress/egress evaluation, policy packs, verifier support, and multimodal protection.
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."