Security scanner for code and logs of AI-powered applications
-
Updated
Jul 14, 2025 - Python
Security scanner for code and logs of AI-powered applications
Comprehensive LLM AI Model protection - cybersecurity toolset aligned to addressing OWASP vulnerabilities - https://genai.owasp.org/llm-top-10/
Stop prompt injections in 20ms. The safety toolkit every LLM app needs. No API keys, no complex setup, just `pip install llm-guard` and you're protected.
Professional AI Security Assurance portfolio demonstrating model supply-chain security, LLM red teaming, static analysis, SBOM validation, risk classification, and governance-aligned AI safety workflows.
Simulating prompt injection and guardrail bypass across chained LLMs in security decision pipelines.
Security scanner for LLM/RAG applications - Test for prompt injection, jailbreaks, PII leakage, hallucinations & more
A collection of resources documenting my research and learning journey in AI System Security.
🔍 Analyze security vulnerabilities in CLI-based LLM deployments, drawing insights from 95 peer-reviewed sources to enhance AI tool safety.
Personal Portfolio Website
Demonstration of Google Gemini refusing a prompt due to SPII when using JSON mode
Agentic AI Request Forgery (AARF) – New vulnerability class exploiting planner ➝ memory ➝ plugin chaining in MCP Server, MAS, LangChain, and A2A agents. Red Team playbooks, threat models, OWASP Top 10 proposal.
CLI tool that uses the Lakera API to perform security checks in LLM inputs
Open-source enforcement layer for LLM safety and governance — ingress/egress evaluation, policy packs, verifier support, and multimodal protection.
A Go-based gRPC service that evaluates AI model prompts and responses using Google Cloud's Model Armor service for content sanitization
MalPromptSentinel (MPS) is a Claude Code skill that detects malicious prompts in uploaded files before Claude processes them. It provides two-tier scanning to identify prompt injection attacks, role manipulation attempts, privilege escalation, and other adversarial techniques.
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."