Skip to content
#

llm-security

Here are 69 public repositories matching this topic...

Evaluation of Google's Instruction Tuned Gemma-2B, an open-source Large Language Model (LLM). Aimed at understanding the breadth of the model's knowledge, its reasoning capabilities, and adherence to ethical guardrails, this project presents a systematic assessment across a diverse array of domains.

  • Updated Feb 26, 2024
  • Jupyter Notebook

PurPaaS is an innovative open-source security testing platform that implements purple teaming (combined red and blue team approaches) to evaluate local LLM models through Ollama. By orchestrating autonomous agents, PurPaaS provides comprehensive security assessment of locally deployed AI models.

  • Updated Nov 4, 2024
  • Python

Improve this page

Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."

Learn more