Security scanner for code and logs of AI-powered applications
-
Updated
Jul 14, 2025 - Python
Security scanner for code and logs of AI-powered applications
Comprehensive LLM AI Model protection - cybersecurity toolset aligned to addressing OWASP vulnerabilities - https://genai.owasp.org/llm-top-10/
Stop prompt injections in 20ms. The safety toolkit every LLM app needs. No API keys, no complex setup, just `pip install llm-guard` and you're protected.
Professional AI Security Assurance portfolio demonstrating model supply-chain security, LLM red teaming, static analysis, SBOM validation, risk classification, and governance-aligned AI safety workflows.
Simulating prompt injection and guardrail bypass across chained LLMs in security decision pipelines.
Security scanner for LLM/RAG applications - Test for prompt injection, jailbreaks, PII leakage, hallucinations & more
A collection of resources documenting my research and learning journey in AI System Security.
Open-source enforcement layer for LLM safety and governance — ingress/egress evaluation, policy packs, verifier support, and multimodal protection.
The official Nodejs library for the OpenGuardrails API
Personal Portfolio Website
A Go-based gRPC service that evaluates AI model prompts and responses using Google Cloud's Model Armor service for content sanitization
History Poison Lab: Vulnerable LLM implementation demonstrating Chat History Poisoning attacks. Learn how attackers manipulate chat context and explore mitigation strategies for secure LLM applications.
MalPromptSentinel (MPS) is a Claude Code skill that detects malicious prompts in uploaded files before Claude processes them. It provides two-tier scanning to identify prompt injection attacks, role manipulation attempts, privilege escalation, and other adversarial techniques.
Agentic AI Request Forgery (AARF) – New vulnerability class exploiting planner ➝ memory ➝ plugin chaining in MCP Server, MAS, LangChain, and A2A agents. Red Team playbooks, threat models, OWASP Top 10 proposal.
Demonstration of Google Gemini refusing a prompt due to SPII when using JSON mode
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."