The only production-ready open-source AI guardrails platform for enterprise AI applications with user-defined scanners and custom model training support.
-
Updated
Dec 19, 2025 - Python
The only production-ready open-source AI guardrails platform for enterprise AI applications with user-defined scanners and custom model training support.
NudeDetect is a Python-based tool for detecting nudity and adult content in images. This project combines the capabilities of the NudeNet library, EasyOCR for text detection, and the Better Profanity library for identifying offensive language in text.
Step-by-Step tutorial that teaches you how to use Azure Safety Content - the prebuilt AI service that helps ensure that content sent to user is filtered to safeguard them from risky or undesirable outcomes
An intelligent task management assistant built with .NET, Next.js, Microsoft Agent Framework, AG-UI protocol, and Azure OpenAI, demonstrating Clean Architecture and autonomous AI agent capabilities
🔍 Benchmark jailbreak resilience in LLMs with JailBench for clear insights and improved model defenses against jailbreak attempts.
Benchmark LLM jailbreak resilience across providers with standardized tests, adversarial mode, rich analytics, and a clean Web UI.
Content moderation (text and image) in a social network demo
Study Buddy is a user-friendly AI-powered web app that helps students generate safe, factual study notes and Q&A on any topic. It features user accounts, study history, and strong content safety filters—making learning interactive and secure.
A Chrome extension that uses Claude AI to protect users under 18 from inappropriate content by analyzing webpage content in real-time.
SentinelShield: Advanced AI content moderation combining Llama Prompt Guard 2, rule-based filtering, and real-time analysis. Protect your applications from harmful content, prompt injection attacks, and inappropriate material with sub-second response times.
profanity checker text moderation
Technical presentations with hands-on demos
Context hygiene & risk adjudication for LLM pipelines: secrets, PII, prompt-injection, policy redaction & tokenization.
Impact Analyzer is a web app that helps you detect toxicity and analyze nuance in your writing before publishing, ensuring your content is respectful, clear, and aligned with your intent.
Azure safety content example using python for text analysis
This tutorial demonstrates how to use the Google Cloud Natural Language API for text moderation. It provides a step-by-step guide to detecting and managing harmful content while promoting responsible AI practices.
promptshield 🛡️ : Tech & Social Media #Content-Safety
A 3-tier diagnostic application designed for hands-on learning about securing AI systems across identity, network, application, and content safety domains.
Real-time speech-to-text system with toxic content detection and filtering. Transcribes live audio using multiple ASR options while automatically detecting and masking harmful language.
Add a description, image, and links to the content-safety topic page so that developers can more easily learn about it.
To associate your repository with the content-safety topic, visit your repo's landing page and select "manage topics."