The official Nodejs library for the OpenGuardrails API
-
Updated
Oct 20, 2025 - TypeScript
The official Nodejs library for the OpenGuardrails API
A Go-based gRPC service that evaluates AI model prompts and responses using Google Cloud's Model Armor service for content sanitization
History Poison Lab: Vulnerable LLM implementation demonstrating Chat History Poisoning attacks. Learn how attackers manipulate chat context and explore mitigation strategies for secure LLM applications.
Context hygiene & risk adjudication for LLM pipelines: secrets, PII, prompt-injection, policy redaction & tokenization.
Adversarial Vision is a research-backed interactive playground exploring how pixels can become prompt injections. It demonstrates how hidden text, subtle contrast shifts, and adversarial visual cues can manipulate multimodal AI models like ChatGPT, Perplexity, or Gemini when they “see” images.
Proxilion GRC MITM proxy secures and manages enterprise AI usage by monitoring, blocking, and auditing all interactions with key features like a GraphQL API Gateway, PII redaction, and ML-based anomaly detection, enabling instant governance and compliance with zero user configuration.
AI-powered ethical decision-making using multi-agent tools
⚡ Blazing-fast (<1ms) regex PII redaction for Node.js/TypeScript. The zero-dependency alternative to slow AI. 🔒 Connects to redactpii.com for SOC 2 & HIPAA audit logs.
A collection of dockerized hacking challenges that focus on breaking out of AI/LLM security mechanisms.
🚀 Unofficial Node.js SDK for Prompt Security's Protection API.
Breaker AI - Security check for your LLM prompts
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."