# ai-policy

Here are 14 public repositories matching this topic...

AION-BRAIN

The left hemisphere. Frameworks, logic, and certainty architecture. Home of FSVE, AION, LAV, ASL, GENESIS, TOPOS, and 60+ epistemically validated frameworks built to make AI systems reliable, not just capable.

  • Updated Apr 15, 2026
  • Python

Official technical stack and economic engine for the NUPA Framework. Authored by Brandon Anthony Bedard (Nov 2025). Features the 40/40/20 Recursive Reinvestment Model and the FASL Protocol.

  • Updated May 14, 2026
  • Python

SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.

  • Updated Jan 21, 2026
  • Python
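The idea behind SpecGuard — expressing safety policies as executable checks rather than prose — can be sketched as below. This is an illustrative sketch only, not SpecGuard's actual API; every name here (`no_pii`, `max_length`, `check`) is hypothetical.

```python
import re

# Illustrative sketch of "policy rules as executable tests".
# NOT SpecGuard's real interface; all names are hypothetical.

def no_pii(output: str) -> bool:
    """Pass only if the output contains no email-like string."""
    return re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", output) is None

def max_length(limit: int):
    """Rule factory: pass only if the output fits within `limit` characters."""
    return lambda output: len(output) <= limit

# A "policy" is just a named collection of predicates over model output.
POLICY = {
    "no_pii": no_pii,
    "max_length_500": max_length(500),
}

def check(output: str) -> dict:
    """Run every rule against one model output; return pass/fail per rule."""
    return {name: rule(output) for name, rule in POLICY.items()}

result = check("Sure, contact me at alice@example.com for details.")
# → {'no_pii': False, 'max_length_500': True}
```

The key design point is that each rule is a plain predicate, so a failing output can be reported per rule, exactly like a failing unit test, instead of relying on a policy document being followed.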
AI-HPP-Standard

AI-HPP-Standard: an inspection-ready architecture for accountable AI systems. Vendor-neutral. Audit-ready. High-risk gated. Developed via structured multi-model orchestration with human oversight. Designed to support emerging international AI governance.

  • Updated Apr 20, 2026
  • Python
