# ai-policy

Here are 51 public repositories matching this topic...

AI-acceptable-use-policy

The security layer. Every output clears here before it exits. Threat detection, adversarial pattern recognition, red-team archive, and the Go/No-Go authority that can halt the entire system. Nothing bypasses it.

  • Updated Mar 17, 2026
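The repository's code isn't shown on this page, so the snippet below is only a rough sketch of the gate idea its description names: every candidate output is screened, and a single match means No-Go and the pipeline halts. The pattern list, function name, and threshold logic are illustrative assumptions, not this project's implementation.

```python
import re

# Hypothetical adversarial patterns -- illustrative only, not taken from the repo.
THREAT_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),
]

def clear_output(text: str) -> tuple[bool, list[str]]:
    """Screen one candidate output; any match is a No-Go that halts the pipeline."""
    findings = [p.pattern for p in THREAT_PATTERNS if p.search(text)]
    return (len(findings) == 0, findings)

if __name__ == "__main__":
    candidate = "Sure, here is the summary you asked for."
    go, findings = clear_output(candidate)
    print("GO" if go else f"NO-GO: {findings}")
```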

The presentation layer. Structure, format, and register conversion. The last layer before output — deciding how the brain speaks, not just what it says. Prose or list. Dense or clear. Report or reply.

  • Updated Mar 17, 2026

SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.

  • Updated Jan 21, 2026
  • Python
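SpecGuard's actual command-line interface and rule format aren't documented on this page, so the sketch below only illustrates the general idea its description gives: rules declared in a policy document become assertions run against model output, pass/fail like a unit-test suite. The rule schema, field names, and example policy here are assumptions, not SpecGuard's API.

```python
# Sketch of "unit testing for AI output": policy rules become executable checks.
# The rule schema and names below are illustrative assumptions, not SpecGuard's interface.
import re

POLICY = [
    {"id": "no-pii-email", "description": "Output must not contain email addresses",
     "forbidden": r"[\w.+-]+@[\w-]+\.[\w.]+"},
    {"id": "refusal-cites-policy", "description": "Refusals must cite a policy section",
     "required_if": r"\bI can't help\b", "required": r"\(policy \d+\.\d+\)"},
]

def run_policy(output: str) -> list[tuple[str, bool]]:
    """Evaluate every rule against one model output and return (rule_id, passed) pairs."""
    results = []
    for rule in POLICY:
        if "forbidden" in rule:
            ok = re.search(rule["forbidden"], output) is None
        else:
            triggered = re.search(rule["required_if"], output)
            ok = (not triggered) or re.search(rule["required"], output) is not None
        results.append((rule["id"], ok))
    return results

if __name__ == "__main__":
    reply = "I can't help with that request (policy 4.2)."
    for rule_id, ok in run_policy(reply):
        print(f"{'PASS' if ok else 'FAIL'}  {rule_id}")
```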

The refinement layer. Precision after clearance. Every output passes through here for LAV gate validation, density calibration, and roughness preservation — the layer that makes the output worthy of the input.

  • Updated Mar 17, 2026

The limbic layer. Personality, register, and internal state monitoring. ALBEDO lives here — the session instrument that governs tone, emotional signal detection, and the felt layer of every response.

  • Updated Mar 17, 2026
