LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Updated Jul 29, 2024 · HTML
AI-driven Threat modeling-as-a-Code (TaaC-AI)
Personal Portfolio Website
An open-source knowledge base of defensive countermeasures to protect AI/ML systems. Features interactive views and maps defenses to known threats from frameworks like MITRE ATLAS, MAESTRO, and OWASP.
MER is a tool that identifies and highlights manipulative communication in text, covering both human conversations and AI-generated responses. It benchmarks language models for manipulative expressions, fostering transparency and safety in AI, and supports victims of manipulation by detecting manipulative patterns in human communication.
🛡️ AI Firewall with 87 detection engines | Topological Data Analysis | Sheaf Theory | Post-Quantum | Protect LLMs from jailbreaks, injections & adversarial attacks
🔍 Analyze security vulnerabilities in CLI-based LLM deployments, drawing insights from 95 peer-reviewed sources to enhance AI tool safety.