Real-time ML agent that detects sensitive content in video streams for classroom, parental, and enterprise safety.
Updated Nov 27, 2025 - Python
3alaKifi is a distributed, collaborative desktop application for event management, designed to help users plan, book, and personalize their events. It provides a centralized, intelligent platform that streamlines the entire event organization process through an intuitive interface and smart support.
A package that analyzes user input about how to avoid negative or unwelcome appearances in a Louis Rossmann video, processing the text to identify key factors and common pitfalls.
A simple toxicity detector.
Production-grade LinkedIn post restyling API with safety and quality checks.
Python-based background service for AI-powered content verification, moderation, and quality control in NoBullFit.
Analyzes content delisting trends due to copyright infringement. This Python project uses data science techniques via Jupyter Notebooks to explore patterns and insights from content removal requests and legal disputes, supporting research into digital content governance and intellectual property rights.
Mobile app built with the React Native framework that interacts with Microsoft Azure services.
A tool for testing the Sightengine image moderation API.
ToxiGuard AI is a browser extension that detects and censors toxic language in real-time using TensorFlow.js. It offers fine-grained controls, visual feedback, auto-censor, adjustable sensitivity, and respects user privacy.
A Brazilian Portuguese (PT-BR) offensive language detection dataset with 10k balanced texts from social media.
A multilingual text analysis system that performs sentiment analysis and toxicity detection with detoxified text generation.
A profanity checker for text moderation.
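A minimal sketch of what a wordlist-based profanity checker can look like (the blocklist, function names, and masking behavior here are illustrative, not drawn from the project):

```python
import re

# Illustrative blocklist; a real moderation service would use a much
# larger curated list or an ML model.
BLOCKLIST = {"darn", "heck"}

def contains_profanity(text: str) -> bool:
    """Return True if any blocklisted word appears as a whole word."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(w in BLOCKLIST for w in words)

def censor(text: str, mask: str = "*") -> str:
    """Replace each blocklisted word with mask characters of equal length."""
    def repl(m: re.Match) -> str:
        w = m.group(0)
        return mask * len(w) if w.lower() in BLOCKLIST else w
    return re.sub(r"[A-Za-z']+", repl, text)
```

Matching whole words (rather than substrings) avoids the classic "Scunthorpe problem" of flagging innocent words that happen to contain a blocklisted string.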
A Python-based tool for detecting adult content in images using machine learning APIs and computer vision techniques. This tool is designed for content moderation, parental controls, and maintaining safe digital environments.
Clean up your platform with AI-powered toxicity detection. This Flask-based REST API uses Detoxify & PyTorch to detect abusive, hateful, or toxic content in real time.
Sovereign Trust Fabric
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment on the [Sp1786/multiclass-sentiment-analysis-dataset](https://huggingface.co/datasets/Sp1786/multiclass-sentiment-analysis-dataset). The model classifies text into three sentiment labels.
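For reference, the base checkpoint named above can be loaded with the `transformers` text-classification pipeline as sketched here (a fine-tuned variant would be loaded the same way by swapping in its model id; the example input is illustrative):

```python
from transformers import pipeline

# Base checkpoint; the fine-tuned model loads identically via its own id.
clf = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-sentiment",
)

result = clf("I really enjoyed this!")[0]
print(result)  # a dict with a "label" and a "score" between 0 and 1
```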
This project uses a synthetic dataset created for demonstration purposes to ensure privacy and safety.
🛡️ AI-powered Chinese cyberbullying detection with explainable AI, a LINE Bot, and GPU acceleration (Chinese Cyberbullying Prevention System).
Multimodal risk content detection system (vision and NLP).