Security scanner for code and logs of AI-powered applications
-
Updated
Jul 14, 2025 - Python
Security scanner for code and logs of AI-powered applications
Sovereign AI Infrastructure for Enterprise RAG. Zero-Trust PII Sanitization, Local Inference (CPU-optimized), and Docker-ready architecture.
Comprehensive LLM AI Model protection - cybersecurity toolset aligned to addressing OWASP vulnerabilities - https://genai.owasp.org/llm-top-10/
Neovim integration for CodeGate
Stop prompt injections in 20ms. The safety toolkit every LLM app needs. No API keys, no complex setup, just `pip install llm-guard` and you're protected.
Professional AI Security Assurance portfolio demonstrating model supply-chain security, LLM red teaming, static analysis, SBOM validation, risk classification, and governance-aligned AI safety workflows.
Trained Without My Consent (TraWiC): Detecting Code Inclusion In Language Models Trained on Code
Simulating prompt injection and guardrail bypass across chained LLMs in security decision pipelines.
Security scanner for LLM/RAG applications - Test for prompt injection, jailbreaks, PII leakage, hallucinations & more
Evaluation of Google's Instruction Tuned Gemma-2B, an open-source Large Language Model (LLM). Aimed at understanding the breadth of the model's knowledge, its reasoning capabilities, and adherence to ethical guardrails, this project presents a systematic assessment across a diverse array of domains.
PromptSniffer is a security auditing tool designed for authorized penetration testing and corporate DLP monitoring. It captures and monitors prompts sent to Large Language Models (ChatGPT, Claude, Gemini, etc.) across your entire network, providing real-time email alerts and comprehensive logging.
Test your LLM system prompts against 287 real-world attack vectors including prompt injection, jailbreaks, and data leaks.
Prompt Injection Detection in LLaMA-based Chatbots using LLM Guard
A living map of the AI agent security ecosystem.
The LLM Defense Framework enhances large language model security through post-processing defenses and statistical guarantees based on one-class SVM. It combines advanced sampling methods with adaptive policy updates and comprehensive evaluation metrics, providing researchers and practitioners with tools to build more secure AI systems.
The official Nodejs library for the OpenGuardrails API
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."