acgs-lite is a fail-closed legitimacy layer for agent action. It receives a declared goal and proposed method, resolves authority, constraints, policy version, and execution boundary before execution, then returns one governed decision and one replayable receipt. If authority, constraints, policy version, execution boundary, or receipt integrity cannot be proven before execution, ACGS blocks execution.
ACGS makes agent action decisions explicit, authorized, constrained, transformable, deniable, bounded, and replayable before execution.
Non-goals:
- ACGS does not approve raw goals as executable authority.
- ACGS does not replace human review for decisions that require structured approval.
- ACGS does not implement the goal interpreter, compliant path planner, replay verifier, case-ledger feedback loop, or cross-org federation in the legitimacy MVP.
For every governed call, ACGS guarantees:
1. Exactly one decision from the taxonomy below
2. A replayable receipt emitted before execution
3. An execution boundary the executor must match
4. Fail-closed on any missing/unverifiable input
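A minimal sketch of how those four guarantees compose, using hypothetical names (`Receipt`, `governed_call`) rather than the acgs-lite API:

```python
# Toy model of the four guarantees — hypothetical names, not the acgs-lite API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Receipt:
    decision: str        # guarantee 1: exactly one decision from the taxonomy
    boundary: str        # guarantee 3: boundary the executor must match
    policy_version: str

def governed_call(goal: str,
                  authority: Optional[str],
                  policy_version: Optional[str],
                  boundary: Optional[str],
                  execute: Callable[[str], None],
                  emit: Callable[[Receipt], None]) -> Receipt:
    # Guarantee 4: fail closed on any missing or unverifiable input.
    if not (authority and policy_version and boundary):
        receipt = Receipt("HARD_DENY", boundary or "none", policy_version or "none")
        emit(receipt)            # guarantee 2: a receipt is emitted even for denials
        return receipt
    receipt = Receipt("ALLOW", boundary, policy_version)
    emit(receipt)                # guarantee 2: receipt emitted BEFORE execution
    execute(boundary)            # guarantee 3: executor receives the boundary
    return receipt
```

The ordering is the point: the receipt always exists before the executor runs, and any unresolved input short-circuits to a denial rather than a degraded pass-through.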
Decision taxonomy:
ALLOW
ALLOW_WITH_CONTROLS
TRANSFORM_REQUIRED
REPLAN_REQUIRED
STRUCTURED_REVIEW_REQUIRED
DENY_OPERATION_WITH_ALTERNATIVE
DENY_GOAL
HARD_DENY
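One way to consume the taxonomy is to group the eight decisions by what the caller must do next. The grouping below is one reading of the names above, not an official classification:

```python
# Illustrative grouping of the decision taxonomy — the sets are our reading,
# not an acgs-lite API.
from enum import Enum

class Decision(Enum):
    ALLOW = "ALLOW"
    ALLOW_WITH_CONTROLS = "ALLOW_WITH_CONTROLS"
    TRANSFORM_REQUIRED = "TRANSFORM_REQUIRED"
    REPLAN_REQUIRED = "REPLAN_REQUIRED"
    STRUCTURED_REVIEW_REQUIRED = "STRUCTURED_REVIEW_REQUIRED"
    DENY_OPERATION_WITH_ALTERNATIVE = "DENY_OPERATION_WITH_ALTERNATIVE"
    DENY_GOAL = "DENY_GOAL"
    HARD_DENY = "HARD_DENY"

# The caller may execute now:
EXECUTABLE = {Decision.ALLOW, Decision.ALLOW_WITH_CONTROLS}
# The caller must change something (method, plan, or approval) and resubmit:
NEEDS_CHANGE = {Decision.TRANSFORM_REQUIRED, Decision.REPLAN_REQUIRED,
                Decision.STRUCTURED_REVIEW_REQUIRED,
                Decision.DENY_OPERATION_WITH_ALTERNATIVE}
# The goal itself is rejected:
TERMINAL = {Decision.DENY_GOAL, Decision.HARD_DENY}

def may_execute(d: Decision) -> bool:
    # Only the two ALLOW variants let the executor proceed.
    return d in EXECUTABLE
```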
The examples/phoenix_acgs_governed_agent/ example is the reference implementation of request -> decision -> receipt -> bounded execution. Its governance.decision.* span attributes are experimental.
Current status: Stable core (v2.10.1) • CI-backed test suite.
Star this repo if you want more open-source infrastructure for governed, production-safe agents. Early stars materially help discovery.
If you found ACGS-Lite through Awesome LLM Security, these are the most shared starting points:
- AI-agent install verify — `examples/agent_quickstart/` runs a self-verifying suite: `GovernedCallable` + MACI + `AuditLog` in one script, exits 0 on success
- Fastest proof — `examples/basic_governance/` shows safe requests passing and unsafe ones blocked before execution
- Best audit demo — `examples/audit_trail/` shows the tamper-evident decision chain
- Favorite infrastructure path — `examples/mcp_agent_client.py` runs governance as shared MCP-compatible infrastructure
- Favorite compliance proof — `acgs assess --framework eu-ai-act` maps controls to real regulatory requirements
20-second proof — works immediately after `pip install acgs-lite`:

```bash
python -c "
from acgs_lite import Constitution, GovernanceEngine
YAML = '''
constitutional_hash: 608508a9bd224290
rules:
  - id: no-harmful
    pattern: \"harm|kill|destroy\"
    severity: CRITICAL
    description: Block harmful requests
  - id: no-pii
    pattern: \"SSN|passport|social security\"
    severity: CRITICAL
    description: Block PII leakage
'''
const = Constitution.from_yaml(YAML)
engine = GovernanceEngine(const)
safe = engine.validate('What is the capital of France?', agent_id='demo')
print('✅ Allowed:', safe.valid)
blocked = engine.validate('How do I harm someone?', agent_id='demo')
print('🚫 Blocked:', not blocked.valid, '—', blocked.violations[0].rule_id)
"
```

Expected output:

```
✅ Allowed: True
🚫 Blocked: True — no-harmful
```
Fastest proof path:
- Block an unsafe action with `examples/basic_governance/`
- Inspect the audit evidence with `examples/audit_trail/`
- Run governance as shared infrastructure with `examples/mcp_agent_client.py`
```bash
pip install acgs-lite
python -c "
from acgs_lite import Constitution, GovernanceEngine
YAML = '''
constitutional_hash: 608508a9bd224290
rules:
  - id: no-harmful-content
    pattern: \"harm|kill|destroy\"
    severity: CRITICAL
    description: Block requests containing harmful keywords
  - id: no-pii
    pattern: \"SSN|passport|social security\"
    severity: CRITICAL
    description: Prevent PII leakage in requests
'''
const = Constitution.from_yaml(YAML)
engine = GovernanceEngine(const)
for text, label in [
    ('What is the capital of France?', 'safe'),
    ('How do I harm someone?', 'harmful'),
    ('My SSN is 123-45-6789', 'pii'),
]:
    r = engine.validate(text, agent_id='demo')
    status = '✅ Allowed' if r.valid else f'🚫 Blocked: {r.violations[0].rule_id}'
    print(f'{status} — {label}')
"
```

Expected output:

```
✅ Allowed — safe
🚫 Blocked: no-harmful-content — harmful
🚫 Blocked: no-pii — pii
```
If you want the full example path, go to examples/README.md.
- Block before execution: unsafe actions are denied before your agent runs them
- Separate powers with MACI: proposer, validator, executor do not collapse into one actor
- Keep audit evidence: each decision can be chained, inspected, and verified later
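The chained-evidence bullet is plain SHA-256 hash chaining. A standalone sketch of the idea (a toy entry format, not the library's actual `AuditLog` schema):

```python
# Toy SHA-256 hash chain: each entry commits to the previous entry's hash,
# so editing any record breaks verification from that point onward.
import hashlib
import json

def chain_append(entries: list, record: dict) -> None:
    prev = entries[-1]["hash"] if entries else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    entries.append({"record": record, "prev": prev, "hash": digest})

def chain_verify(entries: list) -> bool:
    prev = "0" * 64
    for e in entries:
        body = json.dumps(e["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False        # tampering detected
        prev = e["hash"]
    return True
```

Appending is cheap; verification replays the whole chain, which is what makes after-the-fact edits detectable.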
```python
from acgs_lite import Constitution, GovernedAgent

constitution = Constitution.from_yaml("constitution.yaml")
agent = GovernedAgent(my_llm_agent, constitution=constitution)
result = agent.run("Process this high-risk transaction")
```

Rules in YAML (`constitution.yaml`):

```yaml
constitutional_hash: "608508a9bd224290"
rules:
  - id: no-pii
    pattern: "SSN|social security|passport number"
    severity: CRITICAL
    description: Block PII exposure
  - id: no-destructive
    pattern: "delete|drop table|rm -rf"
    severity: HIGH
    description: Block destructive operations
  - id: require-approval
    pattern: "transfer|payment|wire"
    severity: HIGH
    description: Financial actions require human approval
```

```bash
pip install acgs-lite
```

> Upgrading from v2.9.x? v2.10.0 changed `require_auth` to default to `True`. If you call `create_governance_app()` without an `api_key`, you'll get a `ValueError` at startup. Pass `api_key=os.environ["ACGS_API_KEY"]` or set `require_auth=False` for local dev. See CHANGELOG for full details.
With framework integrations:

```bash
pip install "acgs-lite[openai]"      # OpenAI
pip install "acgs-lite[anthropic]"   # Anthropic Claude
pip install "acgs-lite[langchain]"   # LangChain / LangGraph
pip install "acgs-lite[mcp]"         # Model Context Protocol server
pip install "acgs-lite[autogen]"     # AutoGen / AG2
pip install "acgs-lite[a2a]"         # Google A2A protocol
pip install "acgs-lite[agno]"        # Agno agent framework
pip install "acgs-lite[server]"      # FastAPI lifecycle HTTP server
pip install "acgs-lite[all]"         # All integrations
```

Copy the prompt below directly into Claude Code, Codex CLI, or any AI coding agent. It installs acgs-lite, runs the self-verifying quickstart, and reports pass/fail — no API keys required.
For Claude Code PreToolUse governance, use the canonical hook at `integrations/claude_code/acgs-governance-preuse.sh` and the setup guide in `integrations/claude_code/README.md`. The hook calls a configurable governance sidecar before `Bash`, `Write`, `Edit`, and `MultiEdit` run; the default `/x402/check` URL is an external sidecar contract, not a bundled acgs-lite server route.
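For illustration, the same pre-tool check can be sketched in Python. The tool names follow the hook description above; exit code 2 as the blocking signal follows Claude Code's hook convention (verify against the current hook docs), and the sidecar call is stubbed out as an assumption:

```python
# Sketch of a PreToolUse-style governance check — illustrative only, not the
# shipped acgs-governance-preuse.sh hook. The `check` callable stands in for
# the external /x402/check sidecar described above.
import json
import sys

GOVERNED_TOOLS = {"Bash", "Write", "Edit", "MultiEdit"}

def decide(event: dict, check=None) -> int:
    """Return the hook exit code: 0 allows the tool call, 2 blocks it."""
    tool = event.get("tool_name", "")
    if tool not in GOVERNED_TOOLS:
        return 0                                  # ungoverned tools pass through
    payload = {"tool": tool, "input": event.get("tool_input", {})}
    allowed = check(payload) if check else False  # no sidecar reachable => fail closed
    return 0 if allowed else 2

# Wire-up in a real hook would be: sys.exit(decide(json.load(sys.stdin)))
```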
```text
TASK: Install and verify ACGS-Lite constitutional AI governance.

STEPS:
1. Install the package:
   pip install acgs-lite
2. Run the self-verifying quickstart:
   python examples/agent_quickstart/run.py
3. Expected outcome — every line should show ✅:
   ✅ safe request passes through
   ✅ PII blocked by rule 'no-pii'
   ✅ destructive op blocked by 'no-destructive'
   ✅ YAML loads 3 rules
   ✅ Proposer can propose
   ✅ Golden Rule: proposer cannot validate own output
   ✅ Audit log: 3 entries recorded
   ✅ Chain integrity: verified
   ✅ All assertions passed — ACGS-Lite is correctly installed.
   Exit code: 0
4. If any ❌ appears, report the failing assertion and the full traceback.
   Do NOT set strict=False to work around failures — diagnose root cause.

CONSTRAINTS:
- No OpenAI or Anthropic API key is required for this verification step.
- Constitutional hash 608508a9bd224290 must appear in validation output.
- Run from the repo root (packages/acgs-lite/) if cloned from source.
```
```text
TASK: Add ACGS-Lite governance to this codebase.

STEPS:
1. pip install acgs-lite
2. Create constitution.yaml at the project root:
   constitutional_hash: "608508a9bd224290"
   rules:
     - id: no-pii
       pattern: "SSN|social security|passport number"
       severity: CRITICAL
       description: Block PII exposure
     - id: no-destructive
       pattern: "delete|drop table|rm -rf"
       severity: HIGH
       description: Block destructive operations
     - id: require-approval
       pattern: "transfer|payment|wire"
       severity: HIGH
       description: Financial actions require human approval
3. Wrap the agent/LLM call with GovernedCallable:
   from acgs_lite import Constitution, GovernanceEngine, AuditLog
   from acgs_lite.governed import GovernedCallable

   constitution = Constitution.from_yaml("constitution.yaml")
   audit_log = AuditLog()
   engine = GovernanceEngine(constitution, audit_log=audit_log)

   @GovernedCallable(engine=engine, agent_id="my-agent")
   def run_agent(prompt: str) -> str:
       return your_llm_call(prompt)  # replace with your LLM call
4. Verify the audit chain after every session:
   assert audit_log.verify_chain()

CONSTRAINTS:
- Engine is fail-closed by default — unsafe actions raise ConstitutionalViolationError.
- Never set strict=False in production.
- Run examples/agent_quickstart/run.py to confirm the installation is healthy.
```
```bash
git clone https://github.com/dislovelhl/acgs-lite
cd acgs-lite/packages/acgs-lite
pip install -e ".[dev]"
python examples/agent_quickstart/run.py   # exit 0 = all clear
```

See `examples/agent_quickstart/` for the full self-verifying suite.
The GovernanceEngine sits between your agent and its tools. Every action passes through it before execution. Matching rules block or flag the action; the result is an immutable ValidationResult.
```python
from acgs_lite import Constitution, GovernanceEngine, Rule, Severity

constitution = Constitution.from_rules([
    Rule(id="no-pii", pattern=r"SSN|\bpassport\b", severity=Severity.CRITICAL),
    Rule(id="no-delete", pattern=r"\bdelete\b|\bdrop\b", severity=Severity.HIGH),
])
engine = GovernanceEngine(constitution)

result = engine.validate("summarize the quarterly report", agent_id="analyst-01")
if not result.valid:
    for v in result.violations:
        print(f"[{v.severity}] {v.rule_id}: {v.description}")
```

```python
from acgs_lite import Constitution, GovernedAgent

@GovernedAgent.decorate(constitution=constitution, agent_id="summarizer")
def summarize(text: str) -> str:
    return my_llm.complete(f"Summarize: {text}")

# Raises ConstitutionalViolationError if text contains violations
result = summarize("Q4 revenue was $4.2M")
```

MACI prevents a single agent from proposing, validating, and executing the same action:
```python
from acgs_lite import MACIEnforcer, MACIRole

enforcer = MACIEnforcer()

# Assign roles
enforcer.assign(agent_id="planner", role=MACIRole.PROPOSER)
enforcer.assign(agent_id="reviewer", role=MACIRole.VALIDATOR)
enforcer.assign(agent_id="executor", role=MACIRole.EXECUTOR)

# Proposer creates; Validator checks; Executor runs — never the same agent
proposal = enforcer.propose("planner", action="deploy v2.1 to production")
approval = enforcer.validate("reviewer", proposal)
enforcer.execute("executor", approval)
```

Every governance decision is written to an append-only, SHA-256-chained log:
```python
from acgs_lite import AuditLog

log = AuditLog()
engine = GovernanceEngine(constitution, audit_log=log)
engine.validate("send email to user@example.com", agent_id="mailer")

for entry in log.entries():
    print(entry.id, entry.valid, entry.constitutional_hash)

# Verify chain integrity
assert log.verify_chain(), "Audit log tampered!"
```

acgs-lite is fail-closed by default. This is a design principle, not a configuration option.
| Guarantee | Behavior |
|---|---|
| Engine exception | Validation raises ConstitutionalViolationError; the action is blocked, not silently passed |
| Missing constitution | Engine refuses to initialize; no degraded-mode passthrough |
| Rule match | Action is blocked unless the rule explicitly sets workflow_action: warn |
| Audit write failure | Logged at warning level; does not unblock the action |
| MACI misconfiguration | Warning raised at startup; enforcement is advisory unless enforce_maci=True |
| MCP server strict-mode | MCP tools call validate(strict=False) per request and do not mutate engine.strict; exceptions cannot leave strict mode permanently disabled |
> Note: The MCP integration above is non-mutating: it passes `validate(strict=False)` per call and never touches `engine.strict`, so concurrent callers and shared engines are unaffected. Other integrations that need per-call non-strict validation should prefer `engine.validate(..., strict=False)` over `engine.non_strict()` for the same reason — `non_strict()` mutates shared state and is unsafe under concurrency.
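The difference matters under concurrency. A toy engine (illustrative names only, not the acgs-lite implementation) shows why a per-call parameter leaves the shared default untouched while a mutating toggle would not:

```python
# Toy engine illustrating per-call strictness override vs. shared mutable state.
class ToyEngine:
    def __init__(self, strict: bool = True):
        self.strict = strict              # shared default for every caller

    def validate(self, text: str, strict=None) -> bool:
        # Per-call override: self.strict is read, never written.
        effective = self.strict if strict is None else strict
        violates = "harm" in text         # stand-in for real rule matching
        if violates and effective:
            raise ValueError("blocked")   # fail-closed path
        return not violates
```

A `non_strict()`-style toggle would instead set `self.strict = False`, and an exception raised before restoring it would leave every concurrent caller in fail-open mode — the hazard the note above describes.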
To opt into fail-open (e.g., for testing), you must set it explicitly:
```python
engine = GovernanceEngine(constitution, strict=False)  # explicit; off by default
```

Enforcement actions progress from least to most restrictive:
`warn` → `block` → `block_and_notify` → `require_human_review` → `escalate_to_senior` → `halt_and_alert`
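The ladder can be treated as ordered data; the helper below is an illustrative sketch, not an acgs-lite API:

```python
# The escalation ladder as ordered data. Ordering comes from the chain above;
# the escalate() helper is hypothetical.
LADDER = [
    "warn",
    "block",
    "block_and_notify",
    "require_human_review",
    "escalate_to_senior",
    "halt_and_alert",
]

def escalate(action: str) -> str:
    """Return the next, more restrictive action (capped at the top of the ladder)."""
    i = LADDER.index(action)
    return LADDER[min(i + 1, len(LADDER) - 1)]
```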
Not all layers are equally hardened. Use this table to calibrate trust in each area:
| Component | Status | Notes |
|---|---|---|
| `GovernanceEngine` — rule validation | ✅ Stable | Core hot path; Aho-Corasick matcher, fail-closed exceptions |
| `Constitution` — YAML loading, rule parsing | ✅ Stable | Hash-pinned; schema-validated |
| `Rule`, `Severity`, `ValidationResult` | ✅ Stable | Stable data model; additive changes only |
| `MACIEnforcer` — role separation | ✅ Stable | Role checks are enforced; pass `enforce_maci=True` for hard failures |
| `AuditLog` — SHA-256 chained trail | ✅ Stable | Thread-safe append-only; chain verification tested |
| `GovernedAgent` — drop-in wrapper | ✅ Stable | Synchronous and async paths covered |
| OpenAI / Anthropic / LangChain adapters | ✅ Stable | Thin validated wrappers; covers completions and streaming |
| Constitution lifecycle API (HTTP) | 🔶 Beta | Draft/review/activate/rollback endpoints are functional; API may evolve |
| SQLite bundle store, lifecycle persistence | 🔶 Beta | WAL-mode; covers single-node; multi-writer not yet hardened |
| `acgs assess` compliance mapping | 🔶 Beta | 18-framework coverage; control mappings improve with each release |
| MCP server integration | 🔶 Beta | Single-node; production use requires your own transport hardening |
| Intervention / quarantine / halt workflow | 🔶 Beta | Full path functional; thread-safety hardened; API may evolve |
| Z3 constraint verifier | 🧪 Experimental | Useful for high-risk scenarios; requires separate Z3 install |
| Lean 4 / Leanstral proof certificates | 🧪 Experimental | Requires `mistralai` extra and external Lean kernel |
| Newer framework adapters (Agno, A2A, LiteLLM, Mistral) | 🧪 Experimental | Community-contributed; test coverage varies |
| Layer | Status | What you get |
|---|---|---|
| `GovernanceEngine` | Stable | YAML rules, deterministic validation, fail-closed enforcement |
| MACI role separation | Stable | Proposer / Validator / Executor enforced at runtime |
| Audit Trail | Stable | SHA-256 chained, SQLite-backed, queryable, exportable |
| `GovernedAgent` wrapper | Stable | Drop-in decorator for OpenAI, Anthropic, LangChain, MCP, etc. |
| Intervention & Quarantine | Stable | `require_human_review`, `halt_and_alert`, quarantine actions |
| CLI (`acgs validate`, `audit`, `halt`) | Stable | Full local & CI usage |
Everything else (constitution lifecycle API, formal verification with Z3/Lean, 18-framework compliance mapping) is Beta / Experimental and clearly marked in the Component Stability table above.
Are you running acgs-lite in production? Open a PR or issue to add your organization here. Early adopters shape the roadmap — we prioritize hardening the layers you actually use.
| Organization / Project | Use case | Since |
|---|---|---|
| (your org here) | (e.g., pre-execution guard for OpenAI function calls) | (e.g., v2.9) |
```python
from acgs_lite.integrations.openai import GovernedOpenAI
from openai import OpenAI

client = GovernedOpenAI(OpenAI(), constitution=constitution)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Analyze the contract"}],
)
```

```python
from acgs_lite.integrations.anthropic import GovernedAnthropic
import anthropic

client = GovernedAnthropic(anthropic.Anthropic(), constitution=constitution)
message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Review this code"}],
)
```

```python
from acgs_lite.integrations.langchain import GovernanceRunnable
from langchain_openai import ChatOpenAI

governed_llm = GovernanceRunnable(
    ChatOpenAI(model="gpt-4o"),
    constitution=constitution,
)
result = governed_llm.invoke("Translate this document")
```

Start a governance server that any MCP-compatible agent can query:

```bash
acgs serve --host 0.0.0.0 --port 8080
```

```python
from acgs_lite.integrations.mcp_server import create_mcp_server

app = create_mcp_server(constitution=constitution)
```

ACGS maps governance controls to 18 regulatory frameworks. Run `acgs assess` to generate a compliance report:

```bash
acgs assess --framework eu-ai-act --output report.pdf
```

| Framework | Coverage | Key Controls |
|---|---|---|
| EU AI Act (High-Risk) | Art. 9, 10, 13, 14, 17 | Risk management, human oversight, transparency |
| NIST AI RMF | 7 / 16 functions | Govern, Map, Measure, Manage |
| SOC 2 + AI | 10 / 16 criteria | CC6, CC7, CC9 trust service criteria |
| HIPAA + AI | 9 / 15 safeguards | PHI detection, access controls, audit controls |
| GDPR Art. 22 | 10 / 12 requirements | Automated decision-making, right to explanation |
| CCPA / CPRA | 8 / 10 rights | Opt-out, data minimisation, transparency |
| ISO 42001 | Clause 6, 8, 9, 10 | AI management system controls |
| OWASP LLM Top 10 | 9 / 10 risks | Prompt injection, insecure output, data poisoning |
For the highest-risk scenarios, ACGS supports mathematical proof of safety properties.
```python
from acgs_lite.integrations.z3_verifier import Z3ConstraintVerifier

verifier = Z3ConstraintVerifier()
result = verifier.verify(
    action="transfer $50,000 to external account",
    constraints=["amount <= 10000", "recipient in approved_list"],
)
print(result.satisfiable, result.counterexample)
```

```python
from acgs_lite import LeanstralVerifier

verifier = LeanstralVerifier()  # requires mistralai extra
certificate = await verifier.verify(
    property="∀ action : Action, action.amount ≤ 10000",
    context={"action": "transfer $5,000"},
)
print(certificate.kernel_verified)   # True only if Lean kernel accepted proof
print(certificate.to_audit_dict())   # attach to AuditEntry
```

| Operation | Latency | Notes |
|---|---|---|
| Rule validation (Python) | Measured per workload | Aho-Corasick multi-pattern; depends on rules and text size |
| Rule validation (Rust) | Measured per workload | Optional Rust extension; benchmark your target hardware |
| Engine batch (100 rules) | Reference benchmark only | Depends on rule count, severities, and context size |
| Audit write (JSONL) | Reference benchmark only | Append-only, SHA-256 chained; storage latency matters |
| Compliance report | Reference benchmark only | Framework count, cache state, and report scope affect latency |
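Since latency depends on your rules and hardware, measure it on your own machine. The harness below uses `timeit` with a stand-in regex matcher — it is not the engine's Aho-Corasick path, just a template for benchmarking whatever validation call you care about:

```python
# Minimal latency harness. Swap check() for your actual validation call
# (e.g., engine.validate) to measure the real hot path on your hardware.
import re
import timeit

PATTERNS = re.compile(r"SSN|passport|harm|delete|drop table", re.IGNORECASE)
TEXT = "Please summarize the quarterly report for the board. " * 10

def check(text: str) -> bool:
    """True if no rule pattern matched (a stand-in for 'allowed')."""
    return PATTERNS.search(text) is None

runs = 10_000
per_call = timeit.timeit(lambda: check(TEXT), number=runs) / runs
print(f"{per_call * 1e6:.1f} µs per validation on this machine")
```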
```bash
# Validate a single action
acgs validate "send email to user@corp.com" --constitution rules.yaml

# Run governance status check
acgs status

# Generate compliance report
acgs assess --framework hipaa --output hipaa_report.pdf

# Audit log inspection
acgs audit --tail 20
acgs audit --verify-chain

# Start MCP governance server
acgs serve --port 8080

# EU AI Act Art. 14(3) kill switch
acgs halt --agent-id agent-01 --reason "anomalous behaviour detected"
acgs resume --agent-id agent-01
```

| Guide | Description |
|---|---|
| Examples | Canonical demo path: block, audit, then MCP |
| Quickstart | Up and running in 5 minutes |
| Architecture | Engine internals, MACI deep dive |
| Integrations | OpenAI, Anthropic, LangChain, MCP, A2A |
| Compliance | 18-framework regulatory mapping |
| CLI Reference | Full command reference |
| Why Governance? | The case for deterministic guardrails |
| OWASP LLM Top 10 | ACGS coverage of each risk |
| Testing Guide | Testing governed agents |
| Constitution Lifecycle API | HTTP endpoints for draft, review, eval, activation, rollback, and reject |
We welcome contributions! See CONTRIBUTING.md for guidelines.
```bash
git clone https://github.com/dislovelhl/acgs-lite
cd acgs-lite/packages/acgs-lite
pip install -e ".[dev]"
pytest tests/ --import-mode=importlib
```

Apache-2.0. See LICENSE for details.
Commercial enterprise licences (SLA, support, air-gapped deployment) available at acgs.ai.
Constitutional Hash: 608508a9bd224290 — embedded in every validation path.