Skip to content
#

llm-security

Here are 186 public repositories matching this topic...

Evaluation of Google's Instruction Tuned Gemma-2B, an open-source Large Language Model (LLM). Aimed at understanding the breadth of the model's knowledge, its reasoning capabilities, and adherence to ethical guardrails, this project presents a systematic assessment across a diverse array of domains.

  • Updated Feb 26, 2024
  • Jupyter Notebook

PromptSniffer is a security auditing tool designed for authorized penetration testing and corporate DLP monitoring. It captures and monitors prompts sent to Large Language Models (ChatGPT, Claude, Gemini, etc.) across your entire network, providing real-time email alerts and comprehensive logging.

  • Updated Nov 10, 2025
  • Python

Proof of Concept (PoC) demonstrating prompt injection vulnerability in AI code assistants (like Copilot) using hidden Unicode characters within instruction files (copilot-instructions.md). Highlights risks of using untrusted instruction templates. For educational/research purposes only.

  • Updated May 10, 2025

Improve this page

Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."

Learn more