CodeGate: Security, Workspaces and Multiplexing for AI Agentic Frameworks
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems
CTF challenges designed and implemented around machine learning applications
An interactive CLI application for interacting with authenticated Jupyter instances.
A curated list of Large Language Model (LLM) watermarking resources
A powerful LLM query framework with YAML prompt templates, built for automation
CyberBrain_Model is an advanced AI project designed for fine-tuning the model `DeepSeek-R1-Distill-Qwen-14B` specifically for cyber security tasks.
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
[COLM 2025] JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model
Securing LLMs against the OWASP Top 10 Large Language Model vulnerabilities (2024)
This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." ASSET achieves state-of-the-art reliability in detecting poisoned samples in end-to-end supervised learning, self-supervised learning, and transfer learning.
A pure front-end tool for testing the security boundaries of large language models, helping researchers find and fix potential security vulnerabilities and improve the security and reliability of AI systems.
AI runtime inventory: discover shadow AI, trace LLM calls
CyberBrain is an advanced AI project designed specifically for training artificial intelligence models on devices with limited hardware capabilities.
AI Red Team & Blue Team Tips & Tricks!
This repository demonstrates a variety of **MCP Poisoning Attacks** affecting real-world AI agent workflows.
This repo contains reference implementations, tutorials, samples, and documentation for working with Bosch AIShield
LLM Security Project with Llama Guard
This repository is for Red Teamers, security researchers, AI enthusiasts, and students to learn about adversarial attacks on AI/LLM systems. It is strictly for educational use, and the authors disclaim responsibility for any misuse.
An intentionally vulnerable AI chatbot to learn and practice AI Security.