Policy Authoring and Analysis with AI

Happy computer aiding in policy authoring and analysis

In my last post, I argued that policy does not belong in an LLM prompt. Authorization is about authority and scope, not about persuading a language model to behave. Prompts express intent; policies define what is allowed. Mixing the two creates systems that are brittle at best and dangerous at worst.

That raises the obvious follow-up question: So where can AI actually help?

The answer, in practice, is policy authoring and policy analysis. That help doesn't show up in architecture diagrams; it shows up in the day-to-day work of writing, reviewing, and changing policies. What surprised me while working through this material is how tightly those two activities are coupled in practice.

Where AI Can Help

In real systems, policy authoring rarely starts with code. Instead, it often starts with questions:

  • Why is this request allowed?
  • What would cause it to be denied?
  • How narrow is this rule, really?
  • What happens if I change just this one thing?

Those are analysis questions, but they arise before and during authoring, not after. As soon as you start writing or modifying policies, you're already analyzing them. AI tools are well-suited to this part of the work. They can:

  • Explain existing policy behavior in plain language
  • Say why access will be allowed or denied in specific scenarios
  • Propose alternative formulations
  • Surface edge cases and trade-offs you might miss

They are not deciding access. Rather, they're helping you reason about policies that remain deterministic and externally enforced.

A Concrete Place to Start

To help make this clearer, I put together a small GitHub repository that you can use to work through this yourself. The repository reuses the ACME Cedar schema and policies I used for examples in Appendix A of my book, Dynamic Authorization. This repo adds just enough structure to support hands-on, AI-assisted work. Take a minute and review these three files:

  • ai/cursor/README.md explains how the repo is meant to be used and, just as importantly, what it is not for.
  • ai/cursor/authoring-guidelines.md lays out the human-in-the-loop constraints. These aren't optional suggestions; they're the safety rails.
  • ai/cursor/starter-prompt.md defines how the AI is expected to behave.

That starter prompt matters more than it might seem. It's not there for convenience. It shapes how the AI interprets context, authority, and its own role. Rather than expressing authorization rules, the starter prompt limits the AI's scope of participation: it can propose, explain, and compare policy options, but it cannot invent model elements or make decisions.

Take a minute to look around the rest of the repo. You'll find a schema file, a file with defined entities, and a number of policies. You'll also see a sample request and an expected response. A real analysis might have many of those. I asked Cursor to incorporate all of these in its context and then gave it the starter prompt. That set me up for using Cursor to author and analyze the policy set in the repo.

Authoring and Analyzing are Complementary Activities

When working with real authorization policies, authoring and analysis are best understood as complementary activities rather than separate phases. You do not finish writing a policy and then analyze it later. Instead, analysis continuously shapes how policies are authored, refined, and understood.

That interplay becomes clear as soon as you start with a concrete request, such as:

{
  "principal": "User::\"kate\"",
  "action": "Action::\"view\"",
  "resource": "Document::\"q3-plan\""
}

The first step is analytical. Before changing anything, you need to establish the current behavior. Asking why this request is permitted forces the existing policy logic into the open. A useful explanation should reference a specific policy and identify the relationship or condition on the resource that makes the request valid.

Once that behavior is understood, authoring questions follow naturally:

  • What would need to change for this request to be denied?
  • How could that change be made while leaving other customer access unchanged?
  • Where should that change live so that intent remains clear and the policy set remains maintainable?

These questions blur any clean separation between authoring and analysis. Understanding current behavior is analysis. Exploring how a specific outcome could change is authoring. In practice, the two alternate rapidly, each shaping the other.

AI assistance fits naturally into this loop. It can explain existing decisions, propose multiple ways to achieve a different outcome, and help compare the implications of those alternatives. For a narrowly scoped change like this one, those alternatives might include introducing a new forbid policy, narrowing an existing permit policy, or expressing the exception explicitly using an unless clause.

What matters is not that the AI can generate these options but that a human evaluates them. Although the alternatives may be functionally equivalent, they differ in clarity, scope, and long-term maintainability. Choosing between them is a design decision, not a mechanical one.

AI accelerates the conversation between authoring and analysis, making both activities more explicit and more efficient, while leaving responsibility for authorization behavior firmly with the human author.

The Human in the Loop

When using AI to assist with policy work, the most important discipline is how you engage with it. The value comes not from asking for answers, but from asking the right sequence of questions and reviewing the results critically at each step.

Begin by asking the AI to explain the system's current behavior. With the schema, policies, entities, and a concrete request included as context, ask a question such as:

"Which policy or policies permit this request, and what relationship on the resource makes that true?"

Review the response carefully. A good answer should reference a specific policy and point to a concrete condition. In the case of the example in the repo, you might get an answer that references membership in a reader relationship on the document. If the response is vague, or if it invents attributes or relationships that do not exist in the model, stop and correct the context before proceeding. That failure is a signal that the AI is reasoning without sufficient grounding.
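For orientation, the kind of policy that produces that answer might look something like the sketch below. This is a hypothetical example, not the exact policy from the repo, and it assumes the schema gives each document a readers attribute holding the users allowed to view it:

// Hypothetical sketch: permit viewing when the principal is among the document's readers
permit(
    principal,
    action == Action::"view",
    resource
)
when { principal in resource.readers };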

Next, ask the AI to restate the authorization logic in plain language. For example:

"Explain this authorization decision as if you were describing it to a product manager."

This step is critical. It tests whether the policy logic aligns with human intent. If the explanation is surprising or difficult to defend, that is not a problem with the explanation; it is a signal that the policy itself deserves closer scrutiny.

Once you understand the current behavior, introduce a small hypothetical change. Without modifying anything yet, ask a question like:

"What change would be required to deny this request while leaving other customer access unchanged?"

The AI may respond in several ways. One common suggestion is to add a new forbid policy that explicitly denies the request. That can be a valid approach in some situations, but it is rarely the only option, and it is often worth exploring alternatives before expanding the policy set.
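In Cedar, a forbid policy for this case could be as small as the following sketch (the entity names come from the example request; the exact form the AI proposes may differ):

// Hypothetical sketch: explicitly deny kate viewing q3-plan
forbid(
    principal == User::"kate",
    action == Action::"view",
    resource == Document::"q3-plan"
);

Because an explicit forbid overrides any matching permit, this denies the request regardless of which permit policies also apply.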

You can then refine the discussion with a follow-up question:

"What if instead of adding a new policy, we wanted to modify one of the existing policies to do this?"

In response, the AI may suggest modifying an existing permit policy by adding an additional condition to its when clause, typically an extra conjunction in the policy's condition that explicitly excludes this principal and resource. This narrows the circumstances under which the permit applies without introducing a new rule.
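Continuing the hypothetical permit policy sketched earlier, the narrowed version might look like this (again assuming a readers attribute on the document):

// Hypothetical sketch: same permit, narrowed by an extra conjunction
permit(
    principal,
    action == Action::"view",
    resource
)
when {
    principal in resource.readers &&
    !(principal == User::"kate" && resource == Document::"q3-plan")
};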

You can refine the design further by asking:

"What if I wanted to do this by adding an unless clause instead of putting a conjunction in the when clause?"

The AI may then refactor the proposal to use an unless clause that expresses the exception more directly. In many cases, this reads more clearly, especially when the intent is to describe a general rule with a specific carve-out.
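Restated with an unless clause, the same hypothetical policy reads as a general rule with an explicit carve-out:

// Hypothetical sketch: general rule with an explicit exception
permit(
    principal,
    action == Action::"view",
    resource
)
when { principal in resource.readers }
unless { principal == User::"kate" && resource == Document::"q3-plan" };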

At this point, it is tempting to treat these alternatives as interchangeable. They may be syntactically valid and semantically equivalent for a specific request, but they are not equivalent from a design perspective. Choosing between a new forbid policy, a narrower when clause, or a more readable unless clause is a human judgment about clarity, intent, and long-term maintainability. These are decisions about how authority should be expressed, not questions a language model can answer on its own.

This sequence illustrates the core of a human-in-the-loop workflow. The AI can generate options, surface trade-offs, and refactor logic, but it does not decide which policy best reflects organizational intent. The final responsibility for authorization behavior remains with the human reviewer, who must understand and accept the consequences of each change before it is applied.

Guardrails that Make AI Assistance Safe

When AI is embedded directly into the policy authoring and analysis loop, guardrails are not optional. They are what keep the speed and convenience of AI from turning into silent expansion of authority.

In practice, many of these guardrails are enforced through the starter prompt itself. The prompt establishes how the AI is expected to behave, what it may assume, and what it must not invent. The remaining guardrails are enforced through human review.

Treat the Schema as the Source of Truth

The starter prompt explicitly instructs the AI to treat the schema and existing policies as the source of truth. This is essential. The schema defines the universe of valid entities, actions, attributes, and relationships. Any suggestion that relies on something outside that schema is wrong by definition.

If an AI response introduces a new attribute, relationship, or entity that does not exist, stop immediately. That is not a creative proposal—it is a modeling error.

Require Concrete Requests and Outcomes

The starter prompt requires the AI to reason about concrete requests and expected outcomes rather than abstract policy logic. This forces proposed changes to be evaluated in terms of actual behavior:

  • Why is this request permitted?
  • What change would cause it to be denied?
  • What other requests would be affected?

Anchoring discussion in concrete requests makes unintended scope expansion easier to spot.

Bias Toward Least Privilege

The starter prompt biases the AI toward least-privilege outcomes and narrowly scoped changes. Without this bias, AI tools often propose solutions that technically satisfy the question but widen access more than intended.

Broad refactors and sweeping rules should be treated with skepticism unless they are clearly intentional and carefully reviewed.

Separate Exploration from Acceptance

The starter prompt makes it clear that AI output is advisory. The AI can propose, explain, and refactor policy logic, but it does not apply changes or decide which alternative is correct.

Every proposed change must be reviewed manually, line by line, and evaluated in the context of the full policy set. If a change cannot be explained clearly in plain language, it should not be accepted.

Preserve Human Accountability

Authorization policies express decisions about authority, and those decisions have real consequences. The starter prompt reinforces that responsibility for those decisions remains with the human author.

The policy engine evaluates access deterministically, but humans remain accountable for what that access allows or denies. If you would not be comfortable explaining a policy change to an auditor or stakeholder, that discomfort is a signal to revisit the design.

Where AI Belongs—and Where It Doesn't

As I emphasized in my previous post, don't use AI to decide who is allowed to do what. Authorization is about authority, scope, and consequence, and those decisions must remain deterministic, reviewable, and enforceable outside of any language model.

But AI is a great tool for policy authoring and analysis. Used correctly, it helps surface intent, explain behavior, and explore design alternatives faster than humans can alone. It makes the reasoning around policy more explicit, not less.

But that benefit only materializes when boundaries are clear. Prompts must not encode access rules. Schemas must remain the source of truth. Concrete requests must anchor every discussion. And humans must remain accountable for every change that affects authority.

AI can accelerate policy work, but it cannot take responsibility for it. Treat it as a powerful assistant in design and analysis, and keep it far away from enforcement and decision-making. That separation is not a limitation—it's what makes AI useful without making it dangerous.


Photo Credit: Happy computer aiding in policy authoring and analysis from DALL-E (public domain)


AI Is Not Your Policy Engine (And That's a Good Thing)

Confused computer

When building a system that uses a large language model (LLM) to work with sensitive data, you might be tempted to treat the LLM as a decision-maker. LLMs can summarize documents, answer questions, and generate code, so why not let them decide who gets access to what? Because authorization is not a language problem—at least not a natural language problem.

Authorization is about authority: who is allowed to do what, with which data, and under which conditions. That authority must be evaluated deterministically and enforced consistently. Language models, no matter how capable, are not deterministic or consistent. Recognizing this boundary is what allows AI to be useful, rather than dangerous, in systems that handle sensitive data.

The Role of Authorization

Authorization systems exist to answer a narrow but critical question: is this request permitted, and if so, what does that permission allow? In modern systems, this responsibility is usually split across two closely related components.

The policy decision point (PDP) evaluates policies against a specific request and its context, producing a permit or deny decision based on explicit, deterministic policy logic. The policy enforcement point (PEP) enforces that decision by constraining access. It filters data, restricts actions, and exposes only authorized portions of a resource.

Authorization does not generate text, explanations, or instructions. It produces a decision and an enforced scope. Those outputs are constraints, not mere guidance, and they exist independently of any AI system involved downstream. Once they exist, everything that follows can safely assume that access has already been determined.

The Role of the Prompt

This is why access control does not belong in the prompt. You might think it's OK to encode authorization rules directly into a prompt by including instructions like "only summarize documents the user is allowed to see" or "do not reveal confidential information." While well intentioned, these instructions confuse guidance with enforcement.

Prompts describe what we want a model to do. They do not—and cannot—guarantee what the model is allowed to do. By the time a prompt is constructed, authorization should already be finished. If access rules appear in the prompt, it usually means enforcement has been pushed too far downstream.

How Authorization and Prompts Work Together

To understand how authorization and prompts fit together in an AI-enabled system, it helps to focus on what each part of the system produces. Authorization answers questions of authority and access, while prompts express intent and shape how a model responds. These concerns are related, but they operate at different points in the system and produce different kinds of outputs. Authorization produces decisions and enforces scope. Prompt construction assumes that scope and uses it to assemble context for the model.

The following diagram shows this relationship conceptually, emphasizing how outputs from one stage become inputs to the next.

Conceptual flow in an LLM with sensitive data
Separation of responsibility is critical to protect sensitive data (click to enlarge)

A person begins by expressing intent through an application. The service evaluates that request using its authorization system. The PDP produces a decision, and the PEP enforces it by constraining access to data, producing an authorized scope. Only data within that scope is retrieved and assembled as context. The prompt is then constructed from two inputs: the user's intent and the authorized context. The LLM generates a response based solely on what it has been given.

At no point does the model decide what sensitive data it is allowed to use for a response. That question has already been answered and enforced before the prompt ever exists.

Respecting Boundaries

This division of responsibility is essential because of how language models work. Given authorized context, LLMs are extremely effective at summarizing, explaining, and reasoning over that information. What they are not good at—and should not be asked to do—is enforcing access control. They have no intrinsic understanding of obligation, revocation, or consequence. They generate plausible language, not deterministic, authoritative decisions.

Respecting authorization boundaries is a design constraint, not a limitation to work around. When those boundaries are enforced upstream, language models become safer and more useful. When they are blurred, no amount of careful prompting can compensate for the loss of control.

The takeaway is simple. Authorization systems evaluate access and enforce scope. Applications retrieve and assemble authorized context. Prompts express intent, not policy. Language models operate within boundaries they did not define.

Keeping these responsibilities separate is what allows AI to act as a powerful assistant instead of a risk multiplier, and why AI should never be used as your policy engine.


Photo Credit: Confused Computer from DALL-E (public domain)


The First Agentic Internet Workshop

Attendee Map for AIW I

On October 24, 2025, the day after IIW 41 wrapped up, we held the first-ever Agentic Internet Workshop (AIW1) at the Computer History Museum. Hosting it right after IIW 41 made logistics easier and allowed us to build on the momentum—and the brainpower—already in the room.

Like IIW, AIW1 followed an Open Space unconference format, where participants proposed sessions and collaboratively shaped the agenda in the morning at opening circle. With more than 40 sessions across four time slots, the result was a fast-moving day of rich conversations around the tools, architectures, and governance needed for the agentic internet.

Dazza Greenwood conducts a session (click to enlarge)

We welcomed attendees from 10 countries, with the U.S., Canada, Germany, Japan, and Switzerland most represented. The geographic spread (see map above) reflected growing international interest in agents, autonomy, and infrastructure. We expect that trend to accelerate as these ideas move from prototypes to deployed systems.

Topics and Themes

IIW 41 was about the state of identity. AIW1 asked: what happens when we give identity the power to act?

Discussions ranged from deeply technical to philosophically provocative. Participants tackled the infrastructure of agentic browsers, agent identity protocols, and governance models like MCP, KERI, and KYAPAY. We saw sessions on AI agent policy enforcement, private inference, and how to design trust markets and legal frameworks that support human-centric agency.

Agenda Wall (click to enlarge)

We also explored cultural and narrative lenses, from the metaphor of Murderbot to speculative design sessions on agentic AI glasses, human-in-the-loop messaging, and digital media provenance. Questions like "Do you want agents acting without your consent?" and "What is agenthood, really?" brought the conversation to the edge of ethics, autonomy, and technical realism.

Throughout the day, a recurring theme was trust: how it's built, signaled, enforced, and sometimes broken in a world of interoperating agents.

Looking Ahead

We're just getting started. AIW1 was both a proof of concept and a call to action. The conversations launched here are already shaping work in standards groups, startups, and community labs.

Watch for announcements about AIW2 in 2026. We'll be back—with more sessions, broader participation, and even sharper questions.

Closing circle (click to enlarge)

You can see all of Doc's fantastic photos of AIW I here.


Photo Credit: Photos of AIW I from Doc Searls (CC-BY-4.0)


Internet Identity Workshop XLI Report

IIW41 Attendee Map

Twice a year, the Internet Identity Workshop brings together one of the most engaged and forward-thinking communities in tech. In October 2025, we gathered for the 41st time at the Computer History Museum in Mountain View, California. As always, the Open Space unconference format let the agenda emerge from the people in the room. And once again, the room was full of energy, ideas, and deep dives into the problems and promise of digital identity.

Kaliya starting opening circle on Day 1

This time, we also hosted a special Agentic Internet Workshop on October 24, immediately following IIW. It followed the same unconference format, focusing on how personal agents, identity, and infrastructure come together to support agency online. That event deserves its own write-up, so I'll cover it in a separate post.

Whether you're working on self-sovereign identity, verifiable credentials, digital wallets, or the broader architecture of the agentic internet, IIW remains the place where serious builders and thoughtful critics come to talk, sketch, debate, and collaborate. Here's a look at how it went.

Attendance

Internet Identity Workshop XLI (that's 41 for those who haven't picked up Roman numerals as a hobby) brought together 287 participants at the Computer History Museum in October 2025. That's a slight dip from the spring's IIW 40, which topped 300, but still a strong showing, especially in a field where the most impactful conversations often happen in smaller, focused groups.

IIW Session

The sustained numbers are a testament to the growing interest in decentralized identity, personal agency online, and the agentic internet. As always, the hallway track was just as rich as the sessions, and the energy was unmistakable.

Geographic Diversity

We continued to see excellent geographic representation at IIW 41, particularly from within the U.S., where California dominated as usual. Top contributing cities included San Jose (12 attendees), San Francisco (11), and Mountain View (10)—the heart of Silicon Valley is clearly still in it. We continue to see good participation from Japan (11) and had a good delegation from South Korea (4) as well. We saw fewer attendees from Europe and Canada, and that's a shame. They're doing important work and their voices are needed in the global identity conversation.

Small conversations make IIW special

Notably, this time we saw increased participation from Central and South America, a trend we hope continues. IIW benefits tremendously from global perspectives, especially as identity challenges and solutions are shaped by local contexts. That said, Africa remains unrepresented, a gap we'd love to see filled in future workshops. If you know identity thinkers, builders, or policy folks in African countries, point them our way; we'd love to extend the conversation. We'll be holding an IIW-Inspired™ regional event, DID:UNCONF Africa, in February for the second time. We'll work on getting some of those folks over to participate in the global identity conversation next time.

Topics and Themes

As always, the agenda at IIW was built fresh each morning, reflecting the real-time priorities and curiosities of the people in the room. Over the course of three days, that emergent structure revealed a lot about where the digital identity community is—and where it's heading.

Agenda Wall being created on day 2 (8x speedup) (click to view)

One of the most visible throughlines was SEDI (State-Endorsed Decentralized Identity). From foundational overviews to practical demos, governance conversations, and even speculative provocations ("Is Compromising a SEDI Treasonous?"), SEDI became a focal point for discussions about infrastructure, policy, and the nature of institutional trust.

OpenID4VC also had a major presence, with sessions spanning conformance testing, server-to-server issuance, metadata schemas, and questions of organizational adoption. This wasn't just theory—there were working demos, hackathon previews, and implementation notes throughout.

On the technical front, we saw renewed energy around:

  • Agent-centric architectures, including agent-to-agent authorization, trust registries, and personal AI agents.
  • Key management and recovery, especially via KERI, ACDC, and protocols like CoralKM.
  • Post-quantum resilience, with deep dives into cryptographic agility and the readiness of various stacks.

Sessions also ventured into user experience and adoption: passkey wallets, native apps, biometric credentials, and real-world policy interactions. There were thoughtful explorations of friction: what gets in the way of people using these tools? And what happens when systems designed for power users collide with human realities?

Demo session (click to enlarge)

Meanwhile, the social and ethical layers of identity weren't neglected. We heard about harms, digital fiduciaries, and the politics of age assurance and identity verification. Sessions like "The End of the Global Internet" and "Digital Identity Mad-Libs" reminded us that the stakes are not just technical, they're societal.

Importantly, global perspectives played a growing role. From the UN's refugee identity challenges to discussions of Germany's EUDI wallet and OpenID in Japan, it's clear the community is engaging with a wider set of implementation contexts and constraints.

All told, the IIW 41 agenda reflected a community in motion, technically ambitious, intellectually curious, and increasingly attuned to the human systems it hopes to serve. The book of proceedings should be out soon with more details.

This Community Still Matters

IIW 41 reminded us why this community matters. It's not just the sessions, though those were rich and varied, but the way ideas flow between people, across disciplines, and through time. Many of the themes from this workshop—agent-based identity, governance models, ethical frameworks—have been incubating here for years. Others, like quantum resilience or national-scale deployments, are just now stepping into the spotlight.

Whiteboarding (click to enlarge)

If there was a feeling that ran through the week, it was momentum. The stack is maturing. The specs are converging. The real-world stakes are clearer than ever.

Huge thanks to everyone who convened a session, asked a hard question, or scribbled a diagram on a whiteboard. You're why IIW works.

Mark your calendars now: IIW 42 is coming in the spring, April 28-30, 2026. Until then, keep building, keep questioning. And, maybe, even send in a few notes for that session you forgot to write up.

You can see all of Doc's terrific photos of IIW 41 here.


Photo Credit: IIW XLI The 41st IIW from Doc Searls (CC BY 4.0)


Visa Isn't Centralized—and Neither Is First Person Identity

Young woman using Visa

Recently, I heard someone describe Visa as a "centralized" model. That's a common misconception. Visa doesn't operate a single, central database of all transactions. Instead, it provides a trust framework—a set of rules, standards, and infrastructure—that allows thousands of banks, processors, and merchants to interoperate. The actual accounts, balances, and customer relationships remain distributed across the network.

We already live with another example of a trust framework in the physical world: the driver's license. By law, states issue these credentials according to a well-defined process, and other parties—from bars to TSA agents to car dealerships—accept them as proof of age or identity. What's striking is that most of those parties have no formal contract with the issuing state. The framework itself, enshrined in statute and social practice, creates the trust. People rely on the attributes the license carries, not because they have negotiated agreements, but because the framework guarantees its validity.

This is a helpful way to think about first-person identity. Credentials, issued according to this model, make it possible to build trust frameworks for many different ecosystems, just as Visa did for payments or states have done with licenses. A trust framework defines how participants interoperate, but it doesn't centralize everyone's data or relationships.

Here's the key difference: payments is a relatively simple domain compared to identity. You can imagine one Visa handling payments worldwide. Identity, by contrast, has vastly more requirements, policies, and contexts. There won't be a single "identity Visa." Instead, there will be tens of thousands: ecosystem-specific trust frameworks for finance, healthcare, education, workforce, commerce, government services, and beyond. Each will be tailored to its own needs, but they will all be possible because of the same first-person identity foundation.

Fraud prevention, auditability, and traceability are absolutely essential. But those aren't functions of centralization. They come from well-designed credentials and trust frameworks. First-person identity doesn't reject the Visa model—it makes it possible to replicate it, many times over, in the far more complex world of identity.


Photo Credit: Young woman paying with Visa generated using ChatGPT (public domain)


From IIW to the Agentic Internet Workshop

AIW Logo

We're just a few weeks away from the next Internet Identity Workshop (IIW), happening October 21–23, 2025 at the Computer History Museum in Mountain View. I always look forward to those three days—it's the one place where the people building the protocols and systems that underpin digital identity come together, set the agenda themselves, and move the work forward in real time.

This year, we're trying something new. On Friday, October 24, right after IIW wraps up, we'll be hosting the first-ever Agentic Internet Workshop (AIW) at the same venue. Think of it as IIW's younger sibling: same open space format, same collaborative energy—but focused squarely on the agentic internet.

Why? Because we're at an inflection point. Together, we now face the shift toward an internet where agents—AI-powered and otherwise—interact, negotiate, and make decisions on our behalf. Just like IIW gave us space to define protocols like OAuth, OpenID Connect, and Decentralized Identifiers, AIW is meant to give us a neutral ground to figure out the next generation of protocols for an agentic world.

If you've got work in this space—or even just a deep curiosity about where it's headed—I hope you'll stay the extra day. We're capping attendance at 200 people to keep it intimate, and pricing starts at $150 for independents and startups.

👉 Learn more and register here

And one more thing: both IIW and AIW depend on sponsors to keep costs low and the community strong. If you or your organization would like to help support either event, please get in touch with me.


Early Access to Dynamic Authorization

Cover for Dynamic Authorization

I’m excited to share that the first six chapters of my new book, Dynamic Authorization: Adaptive Access Control, are now available through Manning’s Early Access Program (MEAP). You can start reading today at Manning’s site. As part of the launch, Manning is offering 50% off.

I wrote this book because I noticed something curious in the identity world: while authentication has largely become a solved problem—at least technically—authorization remains widely misunderstood. Many organizations still rely on outdated models like static role-based access control, which don’t hold up in today’s distributed, collaborative, and zero-trust environments. But the landscape is changing. New tools, such as Cedar, give us the means to create authorization systems that provide better security while also making life easier for employees and customers.

The first chapters of the book lay out this problem space and begin introducing modern approaches. Chapter 1 frames the challenge of authorization in today’s systems. Chapter 2 introduces the broader topic of digital identity, while chapter 3 drills down on authentication. Chapter 4 introduces authorization, with chapter 5 outlining old-school static authorization techniques and chapter 6 diving into dynamic models: relationship-based access control (ReBAC), attribute-based access control (ABAC), and policy-based access control (PBAC). To make these ideas concrete, I use practical examples drawn from a fictional company, ACME Corp., to motivate the material and show how it is used in life-like scenarios.

Looking ahead, later chapters will introduce Cedar, the open-source policy language from AWS, and compare it with other frameworks like OPA/Rego and XACML. I’ll also cover how to implement policies effectively, treat them as code, and test them for reliability. My goal is to help practitioners understand both the “why” and the “how” of dynamic authorization so they can design systems that adapt to real-world complexity.

If you’ve ever struggled with brittle role hierarchies, confusing permission schemes, or the tension between security and usability, this book is for you. And since it’s in MEAP, you can start reading now and follow along as new chapters are released. I'm open to your feedback and suggestions.


Components for Web Apps

The web has come a long way since static HTML. Even so, building user interfaces is still often an exercise in complexity: frameworks layered on frameworks, intricate build tools, and brittle glue code tying everything together. But there's another way—native, composable building blocks, pieces of UI that can be easily reused, reasoned about, and combined without pulling in half the npm registry. That's the promise of web components, and it's why tools like XMLUI are exciting. They let us focus on function and structure, not scaffolding and ceremony.

I'm going to skip the technical deep dive. You can get that on the XMLUI site or in Jon Udell's excellent XMLUI introduction. But even just a simple example can show the power of components.

Imagine you need a table that displays updated information about the status of London tube stations.

Tube Stations UI

Normally, you'd link to an API, fetch the data, loop over the JSON, and build the DOM with JavaScript or a framework like React. Or...you could do it with XMLUI like this:

<App>
    <Table data="https://api.tfl.gov.uk/line/mode/tube/status">
        <Column bindTo="name" />
        <Column header="status" >
            {$item.lineStatuses[0].statusSeverityDescription}
        </Column>
    </Table>
</App>

This is a web component in action: you name the data source, define the structure, and let XMLUI handle the heavy lifting. And this is just scratching the surface: there are multiple component types, styling options, and even MCP (Model Context Protocol) interfaces for multi-agent or AI-powered applications.

One reason I'm personally excited about XMLUI is that I've been looking for a way for Picos to create their own interfaces, rather than relying on an external React app, like we did with Manifold. Picos—distributed, autonomous agents with lightweight logic—used to have UI capabilities. XMLUI components might allow them to regain that ability, natively and declaratively. Bruce Conrad has already been experimenting with this, and I love the idea of using a tool we don't have to build ourselves. Lightweight, component-driven, and web-native, XMLUI seems like a natural fit for Pico-based architectures.

XMLUI isn't just another UI framework; it's a shift toward declarative, modular web development that feels especially well-suited to the world of Picos. By letting components define themselves, serve themselves, and run directly in the browser, we can finally build UIs that are as lightweight and autonomous as the agents they represent. There's still more to explore, but I'm optimistic that XMLUI can help bring back a native interface layer for Picos that's simple, composable, and entirely in their control, making development and deployment easier.


Let's Stop Phoning Home

Phoning Home

When you're the parent of a teenager out late at night, the prospect of them phoning home might seem reassuring. But that same action—to check in, to report back—is also the dream of every government that wants to monitor its citizens and every company seeking to surveil its customers.

This concern sits at the heart of the No Phone Home movement, which advocates for digital identity systems that don't phone home—that is, digital credentials that do not silently report back to their issuers or some central authority every time they're used. While this kind of telemetry can be marketed as a security or interoperability feature, in reality, it opens the door to a kind of invisible surveillance infrastructure that undermines privacy and individual freedom.

I've added my name as a signatory to the No Phone Home campaign, joining a broad coalition of organizations and individuals who believe that digital identity should serve people, not institutions. The signatories include respected organizations like the ACLU, the EFF, and Brave Software, as well as numerous experts with deep experience in digital identity, cryptography, and privacy advocacy.

Enabling Surveillance...and Control

The phrase "phone home" might conjure nostalgic images of a homesick alien, but in the context of digital credentials, it's far more sinister. When a credential—like a mobile driver's license or digital vaccine certificate—relies on contacting a central authority each time it's presented, it creates a record of where and how it was used. Even if that data isn't stored today, the potential exists. That built-in capacity for surveillance is what the No Phone Home campaign seeks to dismantle.

What's more, the very architecture of phone-home systems inherently concentrates power. It privileges the issuer over the holder, undermining the principles of user control and consent. It's not hard to imagine a world where access to services—buying a train ticket, checking into a hotel, entering a public building—depends on real-time authorization or permission from a government server or corporate backend.

Shoshana Zuboff, in The Age of Surveillance Capitalism, lays bare the business model that feeds off this architecture. Her thesis is chilling: surveillance is no longer a byproduct of digital services—it is the product. As she puts it, "Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data." In this world, "phoning home" isn't a safety feature—it's the toll you pay for participation.

Against that backdrop, the No Phone Home movement demands digital identity architectures where credentials are presented to verifiers without any need to check back with the issuer. This model aligns with the principles of self-sovereign identity and decentralization. It shifts the balance of power, placing control squarely in the hands of the individual.

Systems that Phone Home

Many digital identity systems are designed to contact a central server—typically the issuer or identity provider—whenever an identity credential is presented. This is especially true in federated identity systems, where verifying a token often means checking with the original source. OAuth and OpenID Connect, for example, explicitly redirect the user to the identity provider (IdP) as part of the authentication process. SAML can be more opaque, performing these validations through backend calls that may not be obvious to the user. In all these cases, the result is the same: the issuer is aware of the credential's use, creating a trail of user activity that can be observed, logged, and potentially acted upon.

Some verifiable credential systems can operate similarly, enabling the issuer to learn where and when credentials are used. OpenID for Verifiable Credential Issuance (OpenID4VC), for example, inherits these patterns from OpenID and can allow for issuer visibility into credential presentations. But this is a design choice, not a necessity. For example, the verifiable credential presentation protocol in Anoncreds is designed to avoid these pitfalls, enabling credential verification and even revocation checks without contacting the issuer—preserving privacy without sacrificing trust.

Mobile driver's licenses (mDLs) exemplify how this can go wrong. They feel like physical IDs—familiar, simple, and discreet—but unlike handing over a plastic card, an mDL may rely on server retrieval to validate the credential in real time. This means that governments could know when and where you use your license, and in some implementations, could even grant or deny permission for its use. The result is a powerful mechanism for surveillance, packaged in the form of a seemingly benign, everyday artifact.

The American Association of Motor Vehicle Administrators (AAMVA) has acknowledged the privacy concerns associated with server retrieval mode in mDLs. In their December 2024 Implementation Guidelines (version 1.4), they warned about the tracking potential of this mode. Subsequently, in version 1.5, they prohibited the practice. But, as Timothy Ruff argues in Phone Home is Bad. Really Bad, many systems still support it, and the prohibition is simply a policy choice that could be reversed.

The usual justification for "phoning home" is the need to verify that a credential is still valid or hasn't been revoked. But this function doesn't require building surveillance into the architecture. Cryptographic techniques like revocation registries, signed timestamps, and status lists enable real-time verification without ever contacting the issuer. These methods let verifiers check credential status in a privacy-preserving way, ensuring both trust and autonomy. In fact, this is not just possible, it's already being done. Many projects in the self-sovereign identity space routinely demonstrate how to maintain trust without compromising privacy.

These "phone home" systems risk turning identity into an instrument of control. By embedding surveillance into the plumbing of digital trust, they invert the foundational goal of identity systems: to empower the individual.

Build the Future You Want to Live In

The choice to build digital identity systems that don't phone home is ultimately a choice about the kind of society we want to live in. Do we want a world where every credential presentation creates a record, where silent connections to central servers allow for invisible oversight, and where the potential for control is built into the foundation of everyday interactions?

The No Phone Home campaign isn't just about technical standards—it's about civic architecture. It asks us to reject the logic of surveillance and embrace designs that respect human dignity. As our daily lives increasingly rely on digital intermediaries, we have a narrow window to get this right.

By insisting on architectures that protect privacy by design—not just by policy—we build a future where technology empowers rather than controls. That's a future worth fighting for.


Photo Credit: Phoning Home from DALL-E (public domain)


Leaving AWS

Leaving AWS

At the end of April, I wrapped up my time at AWS. I joined in September 2022, stepping into the world of AWS Identity, where I worked on authorization and related areas like Zero Trust. It was a deeply rewarding experience. I got a front-row seat to the sheer professionalism and operational excellence it takes to run a cloud service at that scale. The bar is high, and I came away with a renewed appreciation for what it means to build for resilience, security, and speed—at the same time, and without compromise.

For the past 20 months, we've been living in Virginia while I led a team of developers at HQ2, Amazon's second headquarters in Arlington. That's ultimately what made this decision necessary. As much as I loved the work and the people, we've long felt the pull of home. Utah is where our family is, and where we wanted to be. With AWS's return-to-office mandates and no local office in Utah, something had to give. In the end, family won. No regrets there.

I'm especially grateful to Neha Rungta, who brought me into AWS. Neha and I go way back—I knew her when she was pursuing her PhD in computer science at BYU. She's a remarkable leader, and AWS is fortunate to have her. I appreciate the trust she placed in me and the opportunity to be part of something as consequential as AWS Identity.

So, what's next? I'm not retired—but for now, my time is my own. I'm working on a book for Manning about authorization, a topic that's increasingly critical as digital systems become more interconnected and identity-aware. I'm also staying engaged with the identity community through the Internet Identity Workshop (IIW), which continues to be a wellspring of innovation and collaboration.

Recently, we launched the IIW Foundation, a 501(c)(3) nonprofit dedicated to advancing open, empowering approaches to digital identity. Our mission is to support not only the flagship IIW events but also IIW-Inspired™ regional gatherings around the world. There's more to come on that front, and I'll share details in future posts.

Stepping away from AWS wasn't easy, but it was the right move. And as I turn the page, I'm excited about the work ahead—and grateful for the journey so far.


Photo Credit: Leaving AWS from DALL-E (public domain)