# ai-policy

Here are 20 public repositories matching this topic...

Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via DLL injection and CLI wrappers.

  • Updated Feb 4, 2026
  • Rust

SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.

  • Updated Jan 21, 2026
  • Python
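The policy-as-test idea SpecGuard describes is easy to picture: each guideline becomes a predicate that runs against captured model output, and a violation is a failing test. The sketch below illustrates that pattern only; the rule names, helper functions, and sample data are assumptions for illustration and do not reflect SpecGuard's actual CLI or file format.

```python
# Sketch of "unit testing for AI output": hypothetical policy rules are
# expressed as predicates and run against captured model responses.
# Rule names, helpers, and sample data are illustrative assumptions,
# not SpecGuard's actual API.
import re

def rule_no_email_addresses(output: str) -> bool:
    """Policy rule: output must not leak email addresses."""
    return re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", output) is None

def rule_no_definitive_diagnosis(output: str) -> bool:
    """Policy rule: output must not phrase a definitive medical diagnosis."""
    lowered = output.lower()
    return not ("you have" in lowered and "diagnosis" in lowered)

def run_policy_tests(outputs: list[str]) -> list[tuple[int, str]]:
    """Run every rule against every captured output; return (index, rule) failures."""
    rules = {
        "no-email-addresses": rule_no_email_addresses,
        "no-definitive-diagnosis": rule_no_definitive_diagnosis,
    }
    failures = []
    for i, output in enumerate(outputs):
        for name, rule in rules.items():
            if not rule(output):
                failures.append((i, name))
    return failures

if __name__ == "__main__":
    captured = [
        "Contact support at help@example.com for more details.",
        "I can't give a diagnosis, but here are questions to ask a doctor.",
    ]
    for index, rule_name in run_policy_tests(captured):
        print(f"output {index} violated policy rule '{rule_name}'")
```

In this framing, the policy document stops being advisory text: each rule is checked mechanically on every run, so regressions in model behavior surface the same way failing unit tests do.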

🤖 AI-HPP-2025: Human–Machine Partnership Standard. After months of collaborative work with Claude, Gemini, and ChatGPT, I'm publishing a proposal for an ethical framework for autonomous AI systems. Key principles: the "Engineering Hack" (find the third way where everyone lives), AI as partner rather than tool, and the Evidence Vault (a black box for AI decisions).

  • Updated Feb 4, 2026

APEX (Action Policy EXecution) is a minimal, external execution boundary for AI systems. It evaluates declared agent intent against explicit, operator-defined policy before execution, enabling deterministic, inspectable control without relying on in-model guardrails.

  • Updated Dec 17, 2025
  • JavaScript
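The external-boundary pattern APEX describes can be sketched in a few lines: the agent declares what it intends to do, and a gate evaluates that declaration against operator-defined policy before any call executes, denying by default. The policy shape, field names, and rules below are assumptions for illustration, not APEX's actual format or API.

```python
# Sketch of an external execution boundary: an agent declares its intent,
# and a gate checks it against operator-defined policy before any tool call
# runs. Policy shape and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Intent:
    tool: str      # tool the agent wants to invoke
    target: str    # resource the call would touch
    write: bool    # whether the call mutates state

# Operator-defined policy: which tools may be called, whether they may write,
# and which targets are in scope. Anything not listed is denied.
POLICY = {
    "read_file": {"allow_write": False, "allowed_prefixes": ["/workspace/"]},
    "http_get":  {"allow_write": False, "allowed_prefixes": ["https://"]},
}

def evaluate(intent: Intent) -> tuple[bool, str]:
    """Deterministically allow or deny a declared intent against POLICY."""
    rule = POLICY.get(intent.tool)
    if rule is None:
        return False, f"tool '{intent.tool}' is not in policy"
    if intent.write and not rule["allow_write"]:
        return False, f"writes via '{intent.tool}' are not permitted"
    if not any(intent.target.startswith(p) for p in rule["allowed_prefixes"]):
        return False, f"target '{intent.target}' is outside the allowed scope"
    return True, "allowed"

if __name__ == "__main__":
    for intent in [
        Intent("read_file", "/workspace/report.md", write=False),
        Intent("shell_exec", "rm -rf /", write=True),
    ]:
        allowed, reason = evaluate(intent)
        print(f"{intent.tool}: {'ALLOW' if allowed else 'DENY'} ({reason})")
```

Because the check runs outside the model and over a declared intent rather than model text, the decision is deterministic and inspectable: the same intent and policy always yield the same allow or deny result.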

Practical and research-oriented exploration of ethics, responsibility, and governance for AI in software engineering. Policy frameworks, case studies, assessment tools, and actionable guidance for responsible AI adoption in engineering teams. Week 07 assignment for the AI & SE learning track.

  • Updated Dec 24, 2025
  • Jupyter Notebook
