CLI tool that uses the Lakera API to perform security checks in LLM inputs
-
Updated
Mar 13, 2024 - Python
CLI tool that uses the Lakera API to perform security checks in LLM inputs
Demonstration of Google Gemini refusing a prompt due to SPII when using JSON mode
LMpi (Language Model Prompt Injector) is a tool designed to test and analyze various language models, including both API-based models and local models like those from Hugging Face.
Comprehensive LLM AI Model protection - cybersecurity toolset aligned to addressing OWASP vulnerabilities - https://genai.owasp.org/llm-top-10/
Universal and Transferable Attacks on Aligned Language Models
The Security Toolkit for LLM Interactions (TS version)
Evaluation of Google's Instruction Tuned Gemma-2B, an open-source Large Language Model (LLM). Aimed at understanding the breadth of the model's knowledge, its reasoning capabilities, and adherence to ethical guardrails, this project presents a systematic assessment across a diverse array of domains.
PurPaaS is an innovative open-source security testing platform that implements purple teaming (combined red and blue team approaches) to evaluate local LLM models through Ollama. By orchestrating autonomous agents, PurPaaS provides comprehensive security assessment of locally deployed AI models.
An awesome and comprehensive list of LLM Securtiy Startups.
Learn LLM/AI Security through a series of vulnerable LLM CTF challenges. No sign ups, all fees, everything on the website.
LLM Security Platform Docs
This repo focus on how to deal with prompt injection problem faced by LLMs
Papers related to Large Language Models in all top venues
Curated list of links, references, books videos, tutorials (Free or Paid), Exploit, CTFs, Hacking Practices etc. which are related to GenAI and LLM Security
Comprehensive LLM AI Model protection | Protect your production GenAI LLM applications | cybersecurity toolset aligned to addressing OWASP vulnerabilities in Large Language Models - https://genai.owasp.org/llm-top-10/
User prompt attack detection system
Example of running last_layer with FastAPI on vercel
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."