🛠️ Remove AI patterns from writing with Stop Slop, a skill designed to guide Claude and other LLMs towards clearer, more authentic prose.
🛡️ Protect students with ClassShield, an AI-driven content moderation tool that combines machine learning and human oversight for safe educational environments.
🔍 Detect AI-generated content offline, ensuring privacy with complete control over text, images, audio, and video.
📝 Transform unstructured text into structured, categorized feedback with StructuredSnip, ensuring clear and consistent communication.
🤖 Forward Telegram messages seamlessly with AI moderation to enhance safety, manage users effectively, and support multiple languages with ease.
🕵️‍♂️ Analyze and reverse engineer keyword filtering in large language models to enhance compliance and operational insights.
🚀 Enable accurate assessment of AI models with the RAIL Score Python SDK, promoting responsible and fair AI development effortlessly.
Event-driven microservices backend for Humane, a behavior-rewarding social platform. Implements CQRS with Kafka for eventual consistency, Elasticsearch for global search, and polyglot persistence, with ML-powered moderation pipelines for content safety.
Automated toxic comment detection system for online community moderation using NLP and machine learning text classification
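The core of a system like this is a supervised text classifier. A minimal sketch of the idea, using scikit-learn TF-IDF features and logistic regression on a toy stand-in dataset (the training data and labels below are illustrative, not from the repository):

```python
# Minimal toxic-comment classifier sketch: TF-IDF n-gram features
# feeding a logistic regression. The tiny dataset is a toy stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "you are an idiot", "I hate you so much", "shut up, loser",
    "great point, thanks", "I really enjoyed this", "have a nice day",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = toxic, 0 = clean

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams
    LogisticRegression(),
)
model.fit(comments, labels)

print(model.predict(["you idiot"])[0])
```

In practice the same pipeline shape scales to real corpora (e.g. the Jigsaw toxic-comment data); only the vectorizer vocabulary and the amount of training data change.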
🤬 🚫 Blasp is a profanity filter package for Laravel that helps detect and mask profane words in a given sentence. It offers a robust set of features for handling variations of offensive language, including substitutions, obscured characters, and doubled letters.
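Blasp itself is a PHP/Laravel package, but the obfuscation-handling it describes (character substitutions, doubled letters) is language-agnostic. A hedged Python sketch of that normalization idea, with a hypothetical word list and substitution map for illustration only:

```python
import re

# Hypothetical word list and substitution map, for illustration only;
# the real Blasp package ships its own (far larger) configuration.
BAD_WORDS = {"darn", "heck"}
SUBSTITUTIONS = {"@": "a", "3": "e", "1": "i", "0": "o", "$": "s", "*": ""}

def normalize(word: str) -> str:
    # Replace common look-alike characters, then collapse repeated
    # letters ("heeeck" -> "heck") to catch doubled-letter obfuscation.
    word = "".join(SUBSTITUTIONS.get(ch, ch) for ch in word.lower())
    return re.sub(r"(.)\1+", r"\1", word)

def censor(sentence: str) -> str:
    # Mask any token whose normalized form is in the word list.
    def mask(m: re.Match) -> str:
        token = m.group(0)
        return "*" * len(token) if normalize(token) in BAD_WORDS else token
    return re.sub(r"\S+", mask, sentence)

print(censor("What the d@rn heeeck"))  # -> "What the **** ******"
```

Normalizing before matching is what lets a single word list catch "d@rn", "daaarn", and "darn" alike, instead of enumerating every obfuscated spelling.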
AI-powered customer service assistant with guardrails for safe, compliant interactions using an LLM and multiple detector models.
Humane is a real-time, privacy-first social platform built with React and TypeScript, featuring chat, video calls, and AI-powered content moderation.
Open-source ML-powered profanity filter with TensorFlow.js toxicity detection, leetspeak & Unicode obfuscation resistance. 21M+ ops/sec, 23 languages, React hooks, LRU caching. npm & PyPI.
Software and Resources for Mitigating Online Gender Based Violence in India
TinyBloggers is a news blog platform that allows admins to create and manage articles, while users can view and search for content. It includes features like OTP verification, content moderation, and user registration.
Built a machine learning model to classify TikTok videos as claims or opinions using NLP and engagement features. Performed feature engineering, text vectorization, and trained Random Forest and XGBoost models, achieving high recall to support content moderation.
A Python-based tool for detecting adult content in images using machine learning APIs and computer vision techniques. This tool is designed for content moderation, parental controls, and maintaining safe digital environments.
An AI-powered system for detecting harmful, toxic, or unsafe content using NLP techniques.
Middleware library for pydantic-ai agents - before/after hooks for guardrails, logging, rate limiting, PII redaction, content moderation, and input validation.
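The before/after hook pattern this library describes can be sketched generically. The names below are illustrative and are not pydantic-ai's actual API; this only shows the wrapping idea:

```python
import re
from typing import Callable, Optional

Handler = Callable[[str], str]

def with_middleware(
    handler: Handler,
    *,
    before: Optional[Handler] = None,
    after: Optional[Handler] = None,
) -> Handler:
    # Wrap a handler with optional before/after hooks: the "before"
    # hook transforms the input (e.g. PII redaction, validation),
    # the "after" hook transforms the output (e.g. moderation, logging).
    def wrapped(prompt: str) -> str:
        if before is not None:
            prompt = before(prompt)
        result = handler(prompt)
        if after is not None:
            result = after(result)
        return result
    return wrapped

def redact_emails(text: str) -> str:
    # Toy PII redaction: mask anything that looks like an email address.
    return re.sub(r"\S+@\S+", "[REDACTED]", text)

def agent(prompt: str) -> str:
    # Stand-in for an LLM agent call.
    return f"Echo: {prompt}"

safe_agent = with_middleware(agent, before=redact_emails)
print(safe_agent("Contact me at jane@example.com"))
# -> "Echo: Contact me at [REDACTED]"
```

Because hooks compose, the same wrapper can be applied repeatedly to stack rate limiting, logging, and moderation around one underlying agent.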