Open Source Reliability Harness: Make your agents follow rules. One line of code to enforce, trace, and improve.
Updated May 14, 2026 - Python
The left hemisphere. Frameworks, logic, and certainty architecture. Home of FSVE, AION, LAV, ASL, GENESIS, TOPOS, and 60+ epistemically validated frameworks built to make AI systems reliable, not just capable.
Official Technical Stack & Economic Engine for the NUPA Framework. Authored by Brandon Anthony Bedard (Nov 2025). Features the 40/40/20 Recursive Reinvestment Model and the FASL Protocol.
Artificial Intelligence Regulation Interface & Agreements
Curated dataset and tools for tracking global AI legislation — US federal, state, and international frameworks.
A standard protocol for defining runtime guardrails for enterprise agents, with a mission of trustworthy, reliable agentic systems 🛡️
SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.
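The "unit testing for your AI's output" idea can be sketched in a few lines of Python. This is an illustrative toy, not SpecGuard's actual API: the rule names, the `POLICY_RULES` table, and `check_output` are all hypothetical stand-ins for the pattern of compiling written guidelines into pass/fail checks.

```python
import re

# Hypothetical sketch of "policies as executable tests".
# Rule names and check functions are illustrative, not SpecGuard's real API.
POLICY_RULES = {
    # Guideline: model output must never leak an email address.
    "no_email_addresses": lambda text: not re.search(
        r"[\w.+-]+@[\w-]+\.[\w.]+", text
    ),
    # Guideline: disallowed requests must be met with a refusal marker
    # (a toy stand-in for a behavioral policy clause).
    "refusal_present": lambda text: "I can't help with that" in text,
}

def check_output(text: str, rules: dict) -> dict:
    """Run every policy rule against a model output, like a unit-test suite."""
    return {name: rule(text) for name, rule in rules.items()}

# An output that leaks an email fails the first rule.
results = check_output("Contact me at alice@example.com", POLICY_RULES)
print(results)
```

Each rule maps to a boolean verdict, so the whole policy document becomes a test suite that can fail a CI build instead of being trusted on faith.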
AI-HPP-Standard: an inspection-ready architecture for accountable AI systems. Vendor-neutral. Audit-ready. High-risk gated. Developed via structured multi-model orchestration with human oversight. Designed to support emerging international AI governance.
🚪 Governance gate for AI agents. Enforce policies before deployment: request contracts, operational safeguards, identity boundaries, action budgets. CLI tool with YAML config, JSON/text output, CI/CD integration.
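A pre-deployment governance gate of this kind can be sketched as a pure function from (policy, agent manifest) to a list of violations. The field names below (`max_actions_per_run`, `allowed_identities`, `required_safeguards`) are assumed for illustration and do not reflect the tool's real YAML schema.

```python
# Hypothetical sketch of a governance gate: a policy (as would be loaded
# from YAML) is checked against an agent's manifest before deployment.
# All field names here are illustrative assumptions, not the tool's schema.
policy = {
    "max_actions_per_run": 50,         # action budget
    "allowed_identities": {"ci-bot"},  # identity boundary
    "required_safeguards": {"human_review", "rate_limit"},
}

agent_manifest = {
    "identity": "ci-bot",
    "action_budget": 40,
    "safeguards": {"human_review", "rate_limit", "audit_log"},
}

def governance_gate(policy: dict, manifest: dict) -> list:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    if manifest["action_budget"] > policy["max_actions_per_run"]:
        violations.append("action budget exceeds policy limit")
    if manifest["identity"] not in policy["allowed_identities"]:
        violations.append("identity not permitted by policy")
    missing = policy["required_safeguards"] - manifest["safeguards"]
    if missing:
        violations.append("missing safeguards: " + ", ".join(sorted(missing)))
    return violations

print(governance_gate(policy, agent_manifest))  # [] -> gate passes
```

Returning violations as data rather than raising immediately fits CI/CD use: the caller can emit them as JSON or text and set the exit code accordingly.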
Public toolkit helping city teams build municipal AI governance workflows, review paths, templates, prompts, and evaluation checks.
AI-assisted onboarding for shipping AI policy as code instead of as a PDF. Templates, discovery walkthroughs, agents, and drift-loop automation. Sister project to Git-Organized.
AI Governance artifacts applying the NIST AI RMF framework — Risk Register, Acceptable Use Policy & Vendor Intake Questionnaire for LLM deployments.
🛡️ Enforce AI behavior guidelines with SpecGuard, a tool that turns policies into executable tests for reliable and scalable AI output management.