Releases: systeminit/swamp
swamp 20260308.211941.0-sha.102d6392
What's Changed
- feat: add interactive secret prompt to vault put (#654)
Summary
- When a user runs `swamp vault put <vault> KEY` in an interactive terminal (TTY, no `=`, no piped stdin), they are now prompted to enter the value with echo suppression — keeping secrets out of both shell history and the visible terminal
- Extracted a shared `readSecretFromTty()` utility in `src/infrastructure/io/stdin_reader.ts` and refactored `auth_login.ts` to use it, removing duplicated raw-mode input code
- Updated vault skill documentation (`SKILL.md` and `references/examples.md`) to describe all three input methods
Motivation
Previously, `vault put` only supported two ways to provide a secret value:
- Inline: `swamp vault put my-vault API_KEY=secret` — convenient but exposes the secret in shell history
- Piped stdin: `echo "secret" | swamp vault put my-vault API_KEY` — secure but less discoverable
Users who simply ran `swamp vault put my-vault API_KEY` in a terminal got an error telling them to use one of the above formats. The interactive prompt is a natural third option that is both secure (hidden input) and ergonomic (no shell history exposure, no pipe gymnastics).
How it works
The resolution order in the command action is:
- Has `=`? → parse as `KEY=VALUE` (unchanged)
- No `=` + stdin is piped? → read value from stdin (unchanged)
- No `=` + stdin is TTY + log mode? → prompt with `readSecretFromTty()` (new)
- No `=` + JSON mode + no pipe? → error with usage hint (unchanged)
This means all existing behaviour is preserved exactly — the interactive prompt only activates when the user is at a TTY and doesn't provide a value through any other method.
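The resolution order above can be sketched as a small decision function. This is illustrative only; `ResolveOpts` and its field names are assumptions, not the real command code:

```typescript
// Hypothetical sketch of the vault-put value resolution order described above.
type InputSource = "inline" | "stdin" | "tty-prompt" | "error";

interface ResolveOpts {
  keyArg: string;        // e.g. "API_KEY=secret" or "API_KEY"
  stdinIsPiped: boolean; // stdin is not a TTY (a value is being piped in)
  stdinIsTty: boolean;   // interactive terminal
  jsonMode: boolean;     // --json; log mode is the inverse
}

function resolveValueSource(opts: ResolveOpts): InputSource {
  if (opts.keyArg.includes("=")) return "inline";             // KEY=VALUE (unchanged)
  if (opts.stdinIsPiped) return "stdin";                      // piped value (unchanged)
  if (opts.stdinIsTty && !opts.jsonMode) return "tty-prompt"; // new interactive path
  return "error";                                             // JSON mode + no pipe → usage hint
}
```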
Testing
All 16 existing unit tests pass (`deno run test src/cli/commands/vault_put_test.ts`).
Additionally, the compiled binary was tested end-to-end against a fresh repo:
# Setup
mkdir /tmp/swamp-test-interactive
cd /tmp/swamp-test-interactive
swamp repo init
swamp vault create local_encryption test-vault
# Test 1: Inline KEY=VALUE (existing behaviour) ✅
swamp vault put test-vault MY_KEY=inline-secret
# → Stored secret "MY_KEY" in vault "test-vault"
# Test 2: Piped stdin (existing behaviour) ✅
echo "piped-secret-value" | swamp vault put test-vault PIPED_KEY
# → Stored secret "PIPED_KEY" in vault "test-vault"
# Test 3: Overwrite protection with piped stdin (existing behaviour) ✅
echo "new-value" | swamp vault put test-vault MY_KEY
# → Error: Secret 'MY_KEY' already exists... Use --force (-f)
# Test 4: Overwrite with --force (existing behaviour) ✅
echo "new-value" | swamp vault put test-vault MY_KEY --force
# → Stored secret "MY_KEY" in vault "test-vault"
# Test 5: Verified both secrets stored correctly ✅
swamp vault list-keys test-vault
# → MY_KEY, PIPED_KEY (2 keys)

The interactive TTY prompt path (`swamp vault put test-vault KEY` with no value) cannot be tested in a non-TTY CI environment, but it uses the same `readSecretFromTty()` function that powers auth login password input, which is proven in production.
Test plan
- All 16 existing `vault_put_test.ts` unit tests pass
- `deno check` passes
- `deno lint` passes
- `deno fmt` passes
- Compiled binary tested: inline `KEY=VALUE` works
- Compiled binary tested: piped stdin works
- Compiled binary tested: overwrite protection works
- Compiled binary tested: `--force` overwrite works
- Manual test: interactive TTY prompt with hidden input
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.211941.0-sha.102d6392/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.211941.0-sha.102d6392/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.211941.0-sha.102d6392/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.211941.0-sha.102d6392/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260308.023116.0-sha.09e423d3
What's Changed
Summary
- Add Scenario 5: Factory Pattern for Model Reuse to `swamp-model/references/scenarios.md` — covers when to use factory vs separate models, model definition with inputs schema, the `name` ↔ data name connection, create/delete workflow snippets, and input requirements for delete
- Add Factory Model Patterns section to `swamp-workflow/references/data-chaining.md` — covers calling one model multiple times, referencing factory instance data downstream, and delete step requirements with before/after examples
- Add brief pointer in `swamp-model/SKILL.md` after the Model Inputs Schema section

Key insight documented: `globalArgument` expressions are selectively evaluated — inputs not provided are skipped, and a runtime error only occurs if the method code actually tries to access an unresolved `globalArgument`. Delete steps always need `instanceName` (to key the data instance) but other create-time inputs are only required if the delete method implementation accesses them.
Closes #650
Test plan
- Documentation-only change, no code modified
- Verified all anchor links resolve within the files
- Verified line counts stay reasonable (SKILL.md: 550 lines)
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.023116.0-sha.09e423d3/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.023116.0-sha.09e423d3/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.023116.0-sha.09e423d3/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.023116.0-sha.09e423d3/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260308.021843.0-sha.ac378830
What's Changed
- docs: add polling and idempotent-create patterns to extension model skill (#651)
Summary
Adds two opt-in patterns to the swamp-extension-model skill for extension models that manage real cloud resources:
- Polling to completion — When a provider's API is async, poll until the resource is fully provisioned before returning from `create`/`update`/`delete`. Ensures `writeResource()` stores complete data so downstream CEL expressions resolve to real values (IPs, ARNs, endpoints), not placeholders like `"pending"`.
- Idempotent creates — Check `context.dataRepository.getContent()` for existing state before calling the provider's create API, then verify the resource still exists at the provider. Prevents duplicate resources when workflows are re-run after partial failures.
Why documentation, not framework changes
Both issues proposed framework-level solutions (waitFor config, createOrGet helpers, workflow-level wait steps). After working through the AWS CloudControl provider implementation, it's clear these are best handled at the model level:
- Every provider has different readiness semantics — AWS uses request tokens and operation status polling, Hetzner has action IDs, DigitalOcean has action endpoints with status fields. A generic `waitFor` in swamp core would either be too simplistic or end up reimplementing what each model needs anyway.
- Idempotency checks are inherently provider-specific — The model knows what field contains the resource identifier (`id`, `VpcId`, `Arn`, `slug`), how to verify existence (a provider-specific GET call), and whether the underlying API is already idempotent. A framework-level skip-if-data-exists would be fragile — stale data from a deleted resource would silently skip creation.
- The primitives already exist — `context.dataRepository.getContent()` is available in every model method. No new framework code needed.
Both patterns are framed as opt-in — the skill tells Claude to ask the user whether they want polling/idempotency when creating a new extension model, rather than mandating them for every model.
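As a rough illustration of the documented idempotent-create pattern, here is a synchronous sketch; the real model methods are async, and `getLocal`, `existsAtProvider`, and `create` are stand-ins for `context.dataRepository.getContent()` and provider API calls:

```typescript
// Illustrative sketch only; helper names are assumptions, not swamp's interfaces.
interface Resource {
  id: string;
}

function idempotentCreate(
  getLocal: () => Resource | undefined,      // stands in for context.dataRepository.getContent()
  existsAtProvider: (id: string) => boolean, // provider-specific existence check (GET)
  create: () => Resource,                    // provider create API call
): Resource {
  // 1. Check local state before calling the provider's create API.
  const existing = getLocal();
  // 2. Verify the resource still exists at the provider; this guards against
  //    stale data from a deleted resource silently skipping creation.
  if (existing && existsAtProvider(existing.id)) {
    return existing; // reuse the verified resource — no duplicate
  }
  // 3. Missing or stale state → create fresh.
  return create();
}
```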
What changed
| File | Change |
|---|---|
| `.claude/skills/swamp-extension-model/SKILL.md` | Added "Optional Patterns for Cloud/API Models" subsection under CRUD Lifecycle Models with links to examples |
| `.claude/skills/swamp-extension-model/references/examples.md` | Added "Polling to Completion" section with `withRetry`/`pollUntilReady` helpers and load balancer example; added "Idempotent Creates" section with droplet example using `dataRepository.getContent()` |
`SKILL.md` stays at 470 lines (under the 500-line skill-creator guideline). No content duplication — `SKILL.md` has decision guidance, `examples.md` has the full patterns and code.
Both files are already in `BUNDLED_SKILLS` in `skill_assets.ts`, so updates ship automatically via `swamp repo init` and `swamp repo upgrade`.
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.021843.0-sha.ac378830/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.021843.0-sha.ac378830/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.021843.0-sha.ac378830/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.021843.0-sha.ac378830/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260308.001236.0-sha.c90aba3a
What's Changed
Summary
Fixes #645. PR #640 introduced a deleted-resource lifecycle check (#637) that calls `findAllForModel()` and throws if any data entry has `lifecycle: "deleted"`. This is too broad — models accumulate multiple data entries over time (old data names, VPC IDs as data names, etc.), and old deleted entries cause false positives that block read/update operations on active resources.
Why This Is the Right Fix
The root cause is that the lifecycle check operates on all data entries for a model, but it should only consider data that belongs to declared resource specs. The `writeResource()` function already auto-injects a `specName` tag on every resource write (line 493 in `data_writer.ts`), so the infrastructure to distinguish declared resource data from historical/untagged data already exists — it just wasn't being used.
This fix:
- Adds `filterDeclaredResourceData()` helper — filters data entries to only those with `tags.type === "resource"` AND `tags.specName` matching a key in the model's declared `resources` map.
- Scopes the read/update pre-check — instead of "fail if ANY data is deleted", the semantic is now "fail if ALL declared-resource data is deleted." Old historical entries without `specName` tags are excluded from the check entirely.
- Scopes deletion marker writing — the `delete` method now only writes deletion markers for declared resource data, not for untagged historical entries.
Why "all deleted" instead of "any deleted"
A model may have multiple declared resource specs. If one is deleted but another is active, blocking the entire model is too aggressive — the active resource should still be readable. The check now only blocks when every single declared resource data entry is deleted, which means the model genuinely has no active resources.
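These semantics can be sketched roughly as follows; the entry shape and helper signatures are assumptions based on the description above, not the actual implementation:

```typescript
// Illustrative sketch of the scoped lifecycle check described above.
interface DataEntry {
  lifecycle?: "active" | "deleted"; // absent = active (backwards compatible)
  tags?: { type?: string; specName?: string };
}

// Keep only data that belongs to a declared resource spec.
function filterDeclaredResourceData(
  entries: DataEntry[],
  declaredSpecs: Set<string>, // keys of the model's `resources` map
): DataEntry[] {
  return entries.filter((e) => {
    const tags = e.tags;
    return tags !== undefined && tags.type === "resource" &&
      tags.specName !== undefined && declaredSpecs.has(tags.specName);
  });
}

// Block read/update only when ALL declared-resource data is deleted.
function isModelDeleted(entries: DataEntry[], declaredSpecs: Set<string>): boolean {
  const declared = filterDeclaredResourceData(entries, declaredSpecs);
  return declared.length > 0 && declared.every((e) => e.lifecycle === "deleted");
}
```

Untagged historical entries never enter the filter, so a stale `deleted` marker on old data cannot block an active resource.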
Backwards compatibility
Data created before PR #640 won't have `specName` tags. This is fine — those entries are excluded from the filter, so they can't trigger false positives. The lifecycle check only activates for data written by the current `writeResource()`, which always injects `specName`.
User Impact
- Bug fixed: Models that accumulate data entries over multiple workflow runs no longer get falsely blocked on `read`/`update` with "Resource was deleted" errors
- No breaking changes: The #637 protection is preserved — genuinely deleted resources (all declared resource data marked deleted) still block read/update and require a `create` to re-create
- No action required: Existing repos work without migration — pre-existing data without `specName` tags is simply excluded from the check
Files Changed
| File | Change |
|---|---|
| `src/domain/models/method_execution_service.ts` | Add `filterDeclaredResourceData()` helper; scope read/update pre-check and deletion marker writing to declared resource data |
| `src/domain/models/method_execution_service_test.ts` | Add `specName` tags to 5 existing test data entries; add 3 new tests for the scoped behavior |
Testing
- All 2772 tests pass (32 in `method_execution_service_test.ts`, including 3 new)
- New tests cover the scoped behavior
- `deno check`, `deno lint`, `deno fmt` all pass
- `deno run compile` succeeds
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.001236.0-sha.c90aba3a/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.001236.0-sha.c90aba3a/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.001236.0-sha.c90aba3a/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260308.001236.0-sha.c90aba3a/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260307.224826.0-sha.df22793b
What's Changed
- fix: skip unresolvable cross-model resource expressions in globalArguments (#644)
Summary
Fixes #641. When a model's `globalArguments` contain CEL expressions referencing another model's resource or file data (e.g., `${{ model.ssh-key.resource.state.key.attributes.id }}`), method execution crashes with `No such key: resource` if the referenced model has no data on disk. This prevents methods like `update` from running even when they don't need the unresolvable field.
What Changed
1. Expression evaluation skips unresolvable resource/file references
In ExpressionEvaluationService.evaluateDefinition(), expressions referencing model.*.resource or model.*.file are now skipped when the context proves the data doesn't exist (the referenced model was never executed or its data was cleaned up). This extends the existing pattern that already skips unresolved inputs.* references.
The raw ${{ ... }} expression is preserved in the definition — it is not silently discarded.
2. Broadened unresolved-expression detection in globalArgs validation
In DefaultMethodExecutionService.executeWorkflow(), the check for unresolved globalArgs fields now catches any remaining ${{ ... }} expression, not just inputs.* ones. This is safe because vault/env expressions are always resolved before executeWorkflow() is called (both in the CLI path via model_method_run.ts:282 and the workflow path via execution_service.ts:402).
3. Fixed pre-existing silent data corruption bug in globalArgs Proxy
The Proxy on context.globalArgs previously only threw for unresolved inputs.* expressions. If a globalArg contained an unresolved model.*.resource expression, the Proxy would silently return the raw ${{ ... }} string to the method. A model method could unknowingly send that literal string to an API. The Proxy now throws for any unresolved expression:
Unresolved expression in globalArguments.ssh_keys: ${{ model["ssh-key"].resource.state... }}
Design Rationale
Follows an established pattern. The codebase already skips unresolved inputs.* expressions in evaluateDefinition() (lines 214-229). The model.*.resource case is structurally identical — an expression references context data that doesn't exist yet. Extending the same skip logic is a natural, consistent extension.
The safety net already exists. The Proxy on context.globalArgs catches any attempt to use an unresolved expression at runtime. The flow is:
- Expression can't be resolved → skip it, leave the raw `${{ ... }}` in the definition
- If the method actually needs that field → Proxy throws a clear error
- If the method doesn't need that field → everything works fine
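The safety net can be sketched as a simplified illustration; the real Proxy lives in the method execution service and its detection logic may differ:

```typescript
// Illustrative sketch: any globalArgs field still holding a raw `${{ ... }}`
// expression throws on access instead of leaking through to the method.
const UNRESOLVED = /\$\{\{.*\}\}/;

function guardGlobalArgs(args: Record<string, unknown>): Record<string, unknown> {
  return new Proxy(args, {
    get(target, prop) {
      const value = (target as Record<string | symbol, unknown>)[prop];
      if (typeof value === "string" && UNRESOLVED.test(value)) {
        throw new Error(
          `Unresolved expression in globalArguments.${String(prop)}: ${value}`,
        );
      }
      return value; // resolved values pass through untouched
    },
  });
}
```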
buildContext() already loads data from disk. When dataRepo is provided (which it is in both CLI and workflow paths), ModelResolver.buildContext() eagerly populates model.*.resource from on-disk data. If the data exists, the expression resolves normally. The skip only triggers when data genuinely doesn't exist.
Alternative approaches are worse:
- "Only evaluate fields the method schema needs" — requires the evaluation service to understand method schemas, a layer violation
- "Make `model.*.resource` persistently available" — already happens; the problem is the referenced model has no data at all
- "Allow `data.latest()` in definitions" — pushes complexity to users
User Impact
- No breaking changes. Every expression that resolved before still resolves. The skip logic only triggers when the CEL evaluation would have crashed anyway.
- What was broken now works. Models with cross-model resource references in `globalArguments` can run methods that don't need those fields (e.g., `update` doesn't need `ssh_keys`).
- Better error messages. If a method accesses an unresolved field, the error is now `Unresolved expression in globalArguments.ssh_keys: ${{ ... }}` instead of the cryptic `No such key: resource`.
- Fixes silent corruption. Unresolved non-input expressions can no longer silently leak through the Proxy as raw strings.
Files Changed
| File | Change |
|---|---|
| `src/domain/expressions/expression_evaluation_service.ts` | Skip expressions with unresolvable `model.*.resource`/`model.*.file` deps |
| `src/domain/models/method_execution_service.ts` | Broaden globalArgs unresolved detection + fix Proxy to catch all expressions |
| `src/domain/models/method_execution_service_test.ts` | 3 new unit tests for Proxy and globalArgs validation behavior |
| `integration/keeb_shell_model_test.ts` | Integration test: model method run with unresolvable cross-model resource ref |
| `.claude/skills/swamp-model/references/troubleshooting.md` | Document new error message and updated behavior |
Testing
- All 2769 existing tests pass
- 3 new unit tests covering Proxy throws, Proxy allows resolved fields, and executeWorkflow skips validation
- 1 new integration test verifying standalone method run succeeds with unresolvable cross-model resource expression
- Manually verified against reproduction case in `/tmp/swamp-641-test`
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.224826.0-sha.df22793b/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.224826.0-sha.df22793b/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.224826.0-sha.df22793b/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.224826.0-sha.df22793b/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260307.222421.0-sha.8ee10347
What's Changed
- fix: add --json to all non-interactive CLI examples in skill files (#643)
Summary
- Added `--json` flag to all non-interactive CLI command examples across 11 skill files
- Skill files are the primary way the agent interacts with the CLI — commands missing `--json` cause the agent to receive human-readable log output instead of structured JSON, making parsing unreliable
- Interactive commands (`edit`, bare `issue bug`/`feature`) are intentionally left unchanged since the agent cannot use them
Files changed: swamp-data, swamp-repo, swamp-extension-model, swamp-issue, swamp-vault, and swamp-model skill files and their references.
Test Plan
- Verified all non-interactive commands now include `--json`
- Verified no `--json` was added to interactive `edit` commands
- Ran `deno fmt` to ensure formatting is correct
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.222421.0-sha.8ee10347/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.222421.0-sha.8ee10347/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.222421.0-sha.8ee10347/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.222421.0-sha.8ee10347/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260307.015433.0-sha.94e4b27d
What's Changed
- fix: allow reserved collective members to push extensions (#642)
Summary
- Moves the reserved collective (`@swamp`, `@si`) check from manifest schema validation to the push command's authorization flow
- Legitimate members of the `swamp` or `si` collectives (verified via `swamp auth whoami`) can now push extensions scoped to those collectives
- Non-members are still rejected with a clear error message listing their available collectives
Problem
Running `swamp extension push manifest.yaml` with a `@swamp/` or `@si/` scoped extension was unconditionally rejected during Zod schema parsing — before the user's collective membership was ever checked. This made it impossible for actual collective members to publish official extensions.
What changed
Manifest validation (`extension_manifest.ts`)
- Removed the `.refine()` that rejected reserved collectives at parse time
- Removed the now-unused `ModelType` import
- The manifest parser now only validates structure (format, required fields), not authorization

Push command (`extension_push.ts`)
- Added `ModelType.isReservedCollective()` check in the existing collective membership validation
- If the extension uses a reserved collective but the server can't be reached to verify membership, the push is rejected (no username fallback for reserved collectives)
- If the server confirms membership, the push proceeds normally

Tests (`extension_manifest_test.ts`)
- Replaced the two rejection tests with acceptance tests verifying `@swamp/` and `@si/` names are valid at the manifest level
Security boundaries preserved
- Server-side membership verification is required for reserved collectives — the username fallback path is explicitly blocked
- Non-members are still rejected — the existing `isAllowed` check validates collective membership via the API
- Network failures are safe — if the server can't be reached for a reserved collective, the push fails closed with a clear error
- All other validation unchanged — scoped name format, CalVer version, content collective matching, safety analysis, and quality checks remain in place
Test plan
- `deno check` passes
- `deno lint` passes
- `deno fmt` passes
- All 2733 tests pass (`deno run test`)
- Manual: `swamp extension push` with a `@swamp/`-scoped extension as a collective member succeeds
- Manual: `swamp extension push` with a `@swamp/`-scoped extension as a non-member is rejected
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.015433.0-sha.94e4b27d/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.015433.0-sha.94e4b27d/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.015433.0-sha.94e4b27d/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260307.015433.0-sha.94e4b27d/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260306.211430.0-sha.49d4f8dd
What's Changed
Problem
When a user runs a delete method on a model (e.g., swamp model method run my-widget delete), the cloud resource is destroyed but swamp's local data store retains no record of the deletion. This creates two problems:
- Doomed API calls: Subsequent `get` or `update` methods happily execute against a resource that no longer exists, resulting in raw cloud provider errors (HTTP 404, "resource not found", etc.) that are confusing and unhelpful to the user.
- No deletion history: There is no way to distinguish "this resource was never created" from "this resource existed but was deleted" — the data store simply shows the last active version with no indication that the resource is gone.
User Impact
Before this fix, a typical user experience looks like:
$ swamp model method run my-widget create # ✅ Creates the resource
$ swamp model method run my-widget delete # ✅ Deletes the resource
$ swamp model method run my-widget get # ❌ Raw 404 error from cloud API
$ swamp model method run my-widget update # ❌ Raw 404 error from cloud API
After this fix:
$ swamp model method run my-widget create # ✅ Creates the resource
$ swamp model method run my-widget delete # ✅ Deletes + writes deletion marker
$ swamp model method run my-widget get # ❌ Clear error: "Resource 'widget' was deleted at 2026-03-06T19:00:00Z — run a 'create' method to re-create it first"
$ swamp model method run my-widget update # ❌ Same clear error
$ swamp model method run my-widget create # ✅ Re-creates, clears deletion state
$ swamp model method run my-widget get # ✅ Works again
Solution
1. Method Kind Classification (MethodKind + inferMethodKind())
Added a `MethodKind` type (`"create" | "read" | "update" | "delete" | "list" | "action"`) and an `inferMethodKind()` utility that:
- Returns the explicit `kind` if set on the method definition
- Otherwise infers from conventional names: `create` → `create`; `get`/`read`/`describe`/`show` → `read`; `update`/`patch` → `update`; `delete`/`destroy`/`remove` → `delete`; `list`/`search`/`find` → `list`
- Returns `undefined` for unrecognized names
Extension models can also set kind explicitly on their method definitions for non-conventional names.
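The inference rules can be sketched as a lookup table; this is an illustrative reconstruction, and the real `inferMethodKind()` operates on method definitions and may differ in detail:

```typescript
// Illustrative reconstruction of the inference rules described above.
type MethodKind = "create" | "read" | "update" | "delete" | "list" | "action";

const NAME_TO_KIND: Record<string, MethodKind> = {
  create: "create",
  get: "read", read: "read", describe: "read", show: "read",
  update: "update", patch: "update",
  delete: "delete", destroy: "delete", remove: "delete",
  list: "list", search: "list", find: "list",
};

function inferMethodKind(
  name: string,
  explicitKind?: MethodKind,
): MethodKind | undefined {
  if (explicitKind) return explicitKind; // explicit `kind` on the definition wins
  return NAME_TO_KIND[name];             // undefined for unrecognized names
}
```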
2. Data Lifecycle Tracking (DataLifecycle)
Added a lifecycle field to DataMetadata with values "active" (default) or "deleted". This is backward-compatible — existing data without a lifecycle field is treated as active.
3. Deletion Markers (Tombstones)
After a delete method succeeds, executeWorkflow() writes a new version of each resource with lifecycle: "deleted" and JSON content containing { deletedAt, deletedByMethod }. This acts as a tombstone that records when and how the resource was deleted.
4. Fast-Fail on Deleted Resources
Before executing read or update methods, executeWorkflow() checks for deletion markers. If any resource has lifecycle: "deleted", it throws a UserError with a clear message including the deletion timestamp and instructions to re-create.
5. Create Clears Deletion State
When a create method writes new resource data, Data.create() defaults lifecycle to "active", naturally superseding the deletion marker. The latest symlink points to the new active version.
6. GC Interaction
Deletion markers (tombstones) are permanent — findExpiredData() in the data lifecycle service skips data with lifecycle: "deleted" so they are never auto-expired.
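The GC skip can be sketched as follows; the metadata shape and expiry representation are assumptions for illustration:

```typescript
// Illustrative sketch: expired-data scanning skips tombstones so deletion
// markers are permanent. Shapes here are assumptions, not the real metadata.
interface VersionMeta {
  name: string;
  lifecycle?: "active" | "deleted"; // absent = active
  expiresAt?: number;               // epoch millis; absent = never expires
}

function findExpiredData(entries: VersionMeta[], now: number): VersionMeta[] {
  return entries.filter((e) =>
    e.lifecycle !== "deleted" &&                   // tombstones are never auto-expired
    e.expiresAt !== undefined && e.expiresAt <= now
  );
}
```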
Files Changed
| File | Change |
|---|---|
| `src/domain/models/model.ts` | `MethodKind` type, `kind` field on `MethodDefinition`, `inferMethodKind()` utility |
| `src/domain/models/user_model_loader.ts` | Propagate `kind` through extension model loader |
| `src/domain/data/data_metadata.ts` | `DataLifecycleSchema`, `lifecycle` field in metadata |
| `src/domain/data/data.ts` | `lifecycle` in `Data` class, `isDeleted` accessor, `withDeletionMarker()` factory |
| `src/domain/data/mod.ts` | Export new types |
| `src/domain/models/method_execution_service.ts` | Pre-check for deleted resources, post-delete marker writing |
| `src/domain/data/data_lifecycle_service.ts` | Skip deletion markers in GC |
Tests
Unit Tests (33 new tests)
model_test.ts (8 tests)
- `inferMethodKind` returns explicit kind from definition
- Infers `create`, `read`, `update`, `delete`, `list` from conventional names
- Returns `undefined` for unrecognized method names
- Explicit `kind` overrides conventional name inference
data_metadata_test.ts (4 tests)
- Schema accepts `"active"` and `"deleted"` lifecycle values
- Schema rejects invalid lifecycle values
- Schema accepts `undefined` (backward compatibility)
data_test.ts (12 tests)
- Defaults lifecycle to `"active"` on create
- Creates data with `"deleted"` lifecycle
- `isDeleted` returns true/false correctly
- `withDeletionMarker()` creates correct deletion marker
- `toData()` includes lifecycle only when `"deleted"` (saves disk)
- `fromData()` defaults to `"active"` when lifecycle missing
- Roundtrip preserves `"deleted"` lifecycle
- `withNewVersion()` preserves lifecycle
method_execution_service_test.ts (6 tests)
- Delete method writes deletion markers to all resources
- Read after delete throws `UserError` with timestamp
- Update after delete throws `UserError`
- Create after delete succeeds (clears deletion state)
- Delete skips already-deleted resources
- Explicit `kind` override prevents deletion marker writing
data_lifecycle_service_test.ts (2 tests)
- Skips deletion markers in `findExpiredData()`
- Returns expired active data alongside deletion markers
user_model_loader_test.ts (1 test)
- `kind` propagated through model definition conversion
E2E Verification
Manually verified the full lifecycle with a test extension model:
- Create → version 1 (active) ✅
- Get → succeeds ✅
- Delete → writes version 2 with `lifecycle: deleted` ✅
- Get after delete → fails with `UserError` ✅
- Update after delete → fails with `UserError` ✅
- Create after delete → writes version 3 (active) ✅
- Get after re-create → succeeds ✅
Test plan
- `deno check` — Type checking passes
- `deno lint` — No lint errors
- `deno fmt` — Formatted
- `deno run test` — All 2765 tests pass
- `deno run compile` — Binary compiles
- E2E verification of full create → delete → fail → re-create lifecycle
Closes #637
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.211430.0-sha.49d4f8dd/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.211430.0-sha.49d4f8dd/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.211430.0-sha.49d4f8dd/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.211430.0-sha.49d4f8dd/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260306.195336.0-sha.4172137b
What's Changed
- refactor: rename "namespace" to "collective" across extensions, models, and vaults (#639)
Summary
Standardizes terminology across the codebase to use "collective" instead of "namespace" when referring to the swamp organizational unit (@org/name scoped names). "Collective" is the correct ubiquitous language — a user is a member of one or more collectives, and extensions are published to a collective.
Scope rule
Only renames "namespace" → "collective" when it refers to swamp's organizational unit. External concepts (HashiCorp Vault enterprise namespaces, CEL expression namespaces) are left unchanged.
Changes
Domain layer
- `extension_namespace_validator.ts` → `extension_collective_validator.ts` — renamed file, types (`NamespaceMismatch` → `CollectiveMismatch`, `NamespaceValidationResult` → `CollectiveValidationResult`), and function (`validateContentNamespaces()` → `validateContentCollectives()`)
- `extension_manifest.ts` — error messages and comments: `@namespace/name` → `@collective/name`, `reserved namespace` → `reserved collective`
- `model_type.ts` — `RESERVED_NAMESPACES` → `RESERVED_COLLECTIVES`, `isReservedNamespace()` → `isReservedCollective()`
- `user_model_loader.ts` — `validateUserNamespace()` → `validateUserCollective()`
- `user_vault_loader.ts` — comments and error messages updated
CLI commands
- `extension search` — `--namespace` flag → `--collective` flag
- `extension push` — now validates the manifest collective against the user's actual collectives from the whoami API (via the organizations response), with a username fallback when the server is unreachable. Blocks the push with a clear error if the collective doesn't match (no more prompt to continue)
- `extension pull`/`yank`/`rm` — error message format: `@namespace/name` → `@collective/name`
- `auth whoami` — now displays collectives (extracted from organizations) in both JSON and log output
Infrastructure
- `swamp_club_client.ts` — added a `WhoamiOrganization` interface, an `organizations` field on `WhoamiResponse`, and a `getCollectives()` helper that extracts org slugs
- `extension_api_client.ts` — search param `namespace` → `collective`
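The `getCollectives()` helper described above might look roughly like this. This is a sketch based only on this summary: the interface fields and the exact fallback condition are assumptions, and the real code in `src/infrastructure/swamp_club_client.ts` may differ.

```typescript
// Hypothetical shapes inferred from the summary; not the exact source.
interface WhoamiOrganization {
  slug: string;
}

interface WhoamiResponse {
  username: string;
  organizations?: WhoamiOrganization[];
}

// Extract collective slugs from the whoami response. The CLI falls back
// to the username when the server is unreachable; that case is modelled
// here as a missing or empty organizations list.
function getCollectives(whoami: WhoamiResponse): string[] {
  const orgs = whoami.organizations ?? [];
  return orgs.length > 0 ? orgs.map((o) => o.slug) : [whoami.username];
}

console.log(getCollectives({
  username: "stack72",
  organizations: [{ slug: "swamp" }, { slug: "system-initiative" }],
}));
```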
Presentation
- `extension_push_output.ts` — `renderExtensionPushNamespaceErrors()` → `renderExtensionPushCollectiveErrors()`, JSON key `namespaceErrors` → `collectiveErrors`
Documentation & skills
- `design/extension.md` — all organizational "namespace" → "collective"
- `.claude/skills/swamp-extension-model/` — SKILL.md, publishing.md, troubleshooting.md updated
- `.claude/skills/swamp-vault/` — SKILL.md, troubleshooting.md, user-defined-vaults.md updated (only swamp org references, not HashiCorp Vault namespace refs)
User-facing changes
- `swamp extension search`: the `--namespace` flag is now `--collective`

  ```
  # Before
  swamp extension search --namespace stack72
  # After
  swamp extension search --collective stack72
  ```

- `swamp extension push`: collective validation now checks against your actual collectives from the server (not just your username). If the manifest collective isn't one of yours, the push is blocked with a clear error listing your available collectives:

  ```
  Extension collective "@swampadmin" is not one of your collectives (@swamp, @system-initiative, @stack72). Use one of: @swamp, @system-initiative, @stack72
  ```

- `swamp auth whoami`: now shows collectives in both output modes:

  ```
  stack72 (stack72@example.com) on https://club.swamp.sh
  Collectives: swamp, system-initiative, stack72
  ```

- Error messages: all error messages now use "collective" instead of "namespace" (e.g., `reserved collective` instead of `reserved namespace`)
Test plan
- All 2733 tests pass
- `deno check` — type checking passes
- `deno lint` — linting passes
- `deno fmt` — formatting passes
- `deno run compile` — binary compiles
- Manual testing of `swamp extension push` with collective validation
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.195336.0-sha.4172137b/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.195336.0-sha.4172137b/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.195336.0-sha.4172137b/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.195336.0-sha.4172137b/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

swamp 20260306.023804.0-sha.0f6f4d7e
What's Changed
Summary
Fixes #626. Input validation previously enforced all required inputs unconditionally before method execution. This caused methods like `delete` to fail on extension models with CRUD patterns because create-time inputs (e.g., `dropletName`, `region`) weren't provided — even though `delete` doesn't use them.
Problem
Extension models define inputs in `globalArguments` with `${{ inputs.* }}` expressions. The `InputValidationService` checked ALL required inputs before any method could run, making it impossible to run methods that don't need those inputs.
For example, a `@test/droplet` extension model with:
- `inputs.required: [dropletName, region, tagName, vpcName]`
- `globalArguments` referencing all four inputs
- `create` method using `context.globalArgs` (needs all inputs)
- `delete` method not accessing `context.globalArgs` (needs no inputs)
Running `swamp model method run web-droplet delete` would fail with input validation errors for `dropletName`, `region`, etc.
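The failing scenario above might correspond to a manifest shaped roughly like this. This is an illustrative sketch only: the field names and structure are assumptions for readability, not the exact extension-model schema.

```yaml
# Illustrative sketch of the @test/droplet model from issue #626;
# the real manifest schema may differ.
inputs:
  required: [dropletName, region, tagName, vpcName]
globalArguments:
  dropletName: ${{ inputs.dropletName }}
  region: ${{ inputs.region }}
  tagName: ${{ inputs.tagName }}
  vpcName: ${{ inputs.vpcName }}
methods:
  create: {}   # reads context.globalArgs, so it needs all four inputs
  delete: {}   # never touches context.globalArgs, so it needs none
```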
Approach: Three-Layer Fix
The fix required changes at three different layers because inputs flow through multiple stages before reaching method code:
Layer 1: CLI Validation (src/cli/commands/model_method_run.ts)
- Scan the method's YAML arguments for `${{ inputs.* }}` expression references
- Filter the `required` array to only include inputs the method references
- Type/enum validation still runs for any inputs the user provides
- Defaults are still applied from the original schema
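A minimal sketch of the Layer 1 idea, assuming hypothetical helper names; the real logic lives in `src/cli/commands/model_method_run.ts` and also applies type/enum checks and defaults, which are omitted here.

```typescript
// Collect every inputs.* field referenced anywhere in the method's
// YAML arguments (modelled here as an already-parsed value tree).
function referencedInputs(methodArgs: unknown): Set<string> {
  const refs = new Set<string>();
  const scan = (value: unknown): void => {
    if (typeof value === "string") {
      for (const m of value.matchAll(/\$\{\{\s*inputs\.(\w+)\s*\}\}/g)) {
        refs.add(m[1]);
      }
    } else if (value && typeof value === "object") {
      for (const v of Object.values(value)) scan(v);
    }
  };
  scan(methodArgs);
  return refs;
}

// Enforce only the required inputs the method actually references.
function filterRequired(required: string[], methodArgs: unknown): string[] {
  const refs = referencedInputs(methodArgs);
  return required.filter((name) => refs.has(name));
}

const required = ["dropletName", "region", "tagName", "vpcName"];
// delete references no ${{ inputs.* }} expressions: nothing is enforced
console.log(filterRequired(required, { id: "${{ model.self.id }}" }));
// create references two inputs: only those two are enforced
console.log(filterRequired(required, {
  name: "${{ inputs.dropletName }}",
  region: "${{ inputs.region }}",
}));
```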
Layer 2: CEL Expression Evaluation (src/domain/expressions/expression_evaluation_service.ts)
- Skip evaluating CEL expressions that reference inputs not provided by the user
- Without this, the CEL evaluator would crash with `No such key: apiToken` when trying to resolve `${{ inputs.apiToken }}` for a method that doesn't need it
- Follows the same pattern as the existing vault/env runtime expression skipping
Layer 3: Runtime Guard (src/domain/models/method_execution_service.ts)
- Wrap `context.globalArgs` in a Proxy that throws a clear error if a method accesses a field containing an unresolved `${{ inputs.* }}` expression
- Strip unresolved fields from globalArguments before Zod schema validation (otherwise validation fails on raw expression strings)
- This catches the case where extension model TypeScript code tries to use `context.globalArgs.dropletName` but the user didn't provide that input
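The Proxy guard can be sketched as follows, assuming unresolved values are still raw `${{ inputs.* }}` strings at this point; the actual guard lives in `src/domain/models/method_execution_service.ts` and the error wording here is illustrative.

```typescript
// Wrap globalArgs so reading an unresolved field fails loudly instead
// of leaking a raw expression string into extension model code.
function guardGlobalArgs(
  args: Record<string, unknown>,
): Record<string, unknown> {
  const UNRESOLVED = /\$\{\{\s*inputs\./;
  return new Proxy(args, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value === "string" && UNRESOLVED.test(value)) {
        throw new Error(
          `Input for globalArgs.${String(prop)} was not provided ` +
            `(unresolved expression)`,
        );
      }
      return value;
    },
  });
}

const args = guardGlobalArgs({
  dropletName: "${{ inputs.dropletName }}", // input not supplied by the user
  image: "ubuntu-24-04",
});

console.log(args.image); // a resolved field reads normally
try {
  console.log(args.dropletName); // accessing the unresolved field throws
} catch (e) {
  console.log((e as Error).message);
}
```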
Supporting Code (src/domain/expressions/expression_parser.ts)
- New `extractInputReferencesFromCel()` — extracts input field names from a single CEL expression string
- New `extractInputReferences()` — scans a data structure for `${{ }}` expressions and returns the set of `inputs.*` fields referenced
- Handles both dot notation (`inputs.fieldName`) and bracket notation (`inputs["field-name"]`)
- Uses negative lookbehind to exclude cross-model references (`model.foo.input.bar`)
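A sketch of the single-expression helper under the behaviour described above; the regex details are assumptions, and the real implementation is in `src/domain/expressions/expression_parser.ts`.

```typescript
// The negative lookbehind rejects matches preceded by a word character
// or a dot, so cross-model references such as model.web.inputs.region
// are not treated as this model's own inputs.
function extractInputReferencesFromCel(expr: string): Set<string> {
  const refs = new Set<string>();
  const pattern = /(?<![\w.])inputs(?:\.([A-Za-z_]\w*)|\["([^"]+)"\])/g;
  for (const m of expr.matchAll(pattern)) {
    const name = m[1] ?? m[2];
    if (name) refs.add(name);
  }
  return refs;
}

// Dot and bracket notation both resolve to the referenced field name:
console.log(extractInputReferencesFromCel('${{ inputs.dropletName }}'));
console.log(extractInputReferencesFromCel('${{ inputs["tag-name"] }}'));
// A cross-model reference yields no inputs for this model:
console.log(extractInputReferencesFromCel('${{ model.web.inputs.region }}'));
```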
Why all three layers?
A single-layer fix was insufficient:
- Layer 1 alone — doesn't help extension models where inputs flow through `globalArguments` → TypeScript code, not YAML method arguments
- Layer 1 + Layer 2 alone — validation passes and CEL evaluation succeeds, but Zod schema validation in `executeWorkflow()` fails on raw `${{ inputs.* }}` strings; and if that's bypassed, `create` silently produces data with unresolved expression strings instead of failing
- All three layers — validation is method-aware, CEL gracefully skips missing inputs, and the runtime Proxy catches actual access to unresolved fields with a clear error message
Binary Testing
After compiling, tested the binary against a reproduction repo (/tmp/swamp-626-test/) with a @test/droplet extension model matching the exact scenario from issue #626:
| Scenario | Expected | Result |
|---|---|---|
| `swamp model method run web-droplet delete` (no inputs) | Exit 0 — delete doesn't use globalArgs | Exit 0 |
| `swamp model method run web-droplet create` (no inputs) | Exit 1 — clear error about missing inputs | Exit 1: `Missing required input(s): dropletName (needed by globalArguments.dropletName)` |
| `swamp model method run web-droplet create --input dropletName=test --input region=nyc3 --input tagName=demo --input vpcName=my-vpc` | Exit 0 — data persisted correctly | Exit 0, data written to `.swamp/data/` |
Test plan
- 7 new unit tests for `extractInputReferences()` in `expression_parser_test.ts` — dot/bracket notation, deduplication, cross-model exclusion, nested structures
- 3 new integration tests in `inputs_test.ts` — unreferenced required inputs succeed, referenced required inputs still validated, globalArguments inputs don't block unrelated methods
- Full test suite: 2733/2733 passed
- `deno check` — passed
- `deno lint` — passed
- `deno fmt` — passed
- `deno run compile` — passed
- Compiled binary tested against reproduction repo (3 scenarios above)
🤖 Generated with Claude Code
Installation
macOS (Apple Silicon):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.023804.0-sha.0f6f4d7e/swamp-darwin-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

macOS (Intel):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.023804.0-sha.0f6f4d7e/swamp-darwin-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (x86_64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.023804.0-sha.0f6f4d7e/swamp-linux-x86_64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/

Linux (aarch64):
curl -L https://github.com/systeminit/swamp/releases/download/v20260306.023804.0-sha.0f6f4d7e/swamp-linux-aarch64 -o swamp
chmod +x swamp && sudo mv swamp /usr/local/bin/