
Tags: systeminit/swamp


v20260217.115044.0-sha.7678da9c

Verified

This commit was created on GitHub.com and signed with GitHub’s verified signature.
docs: Update README to reflect multi-tool support (#365)

Updates the README to reflect first-class support for Claude Code,
Cursor, OpenCode, and Codex following the changes introduced in #358.

- Quick Start: Added --tool flag examples for all four supported tools
- Using Swamp with AI Agents: Replaced Claude-centric prose with a tool
comparison table showing init flags, skills directories, and
instructions files for each tool; added swamp repo upgrade --tool
example for switching tools
- User-Defined Models: Removed hard-coded .claude/ path in the skill
guide link, making the reference tool-agnostic

v20260216.220214.0-sha.8945aba5

feat: Add telemetryDisabled config to .swamp.yaml (#364)

## Summary

- Add `telemetryDisabled?: boolean` to `RepoMarkerData` so users can
persistently opt out of telemetry via `.swamp.yaml` instead of passing
`--no-telemetry` on every invocation
- Add `isTelemetryDisabledByConfig()` helper in `src/cli/mod.ts` and
check it in `initTelemetryService()` before creating any telemetry
infrastructure
- Add Telemetry section to README.md documenting the exact payload
shape, redaction policy, and both opt-out methods

Closes #362
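
The check described above can be sketched in a few lines. This is an illustrative reduction, not the actual `src/cli/mod.ts` code: `RepoMarkerData` is cut down to the one field that matters here.

```typescript
// Reduced sketch of the described config check; only the relevant field kept.
interface RepoMarkerData {
  telemetryDisabled?: boolean;
}

// Telemetry is disabled only when a marker exists and the field is
// explicitly true; a missing marker, an absent field, or `false` all
// leave telemetry enabled.
function isTelemetryDisabledByConfig(marker: RepoMarkerData | null): boolean {
  return marker?.telemetryDisabled === true;
}
```

The strict `=== true` comparison is what makes the four test cases in the test plan (null marker, field absent, field false, field true) collapse to a one-line body.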

## Test Plan

- Added YAML roundtrip test for `telemetryDisabled: true` in
`repo_marker_repository_test.ts`
- Added 4 unit tests for `isTelemetryDisabledByConfig()` covering: null
marker, field absent, field false, field true
- All 1722 tests pass, `deno check`, `deno lint`, `deno fmt` clean

v20260216.215050.0-sha.bc3e465e

fix: use recursive key sorting in computeHash to include nested data (#363)

## Summary

Fixes #337 — reported by @umag.

`Definition.computeHash()` used `JSON.stringify(data,
Object.keys(data).sort())` to produce deterministic hashes. The second
argument to `JSON.stringify` is a **replacer array** that acts as a key
whitelist applied at **all nesting levels**. This means any nested
object keys not present in the top-level key list (e.g., `message`
inside `globalArguments: { message: "hello" }`) were **silently
dropped** from serialization. The hash was effectively computed over
only scalar top-level fields while nested structures like `tags`,
`globalArguments`, `methods`, and `inputs` all serialized as `{}`.

The fix replaces the replacer array with a **replacer function** that
recursively sorts keys at every depth. `JSON.stringify` calls the
replacer for every value in the object tree, so nested objects at any
depth get their keys sorted and fully serialized. Arrays pass through
unchanged (preserving element order).

The existing test didn't catch this because it relied on auto-generated
UUIDs to differentiate hashes (`Definition.create()` generates a new
`crypto.randomUUID()` each time), meaning hashes would always differ
regardless of whether nested content was included. The test has been
updated to use a fixed ID so it actually validates that different nested
content produces different hashes.
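
The difference between the two replacer forms can be shown in a few lines. This is an illustrative sketch, not the repository's `computeHash()` code; the `globalArguments` example is taken from the description above.

```typescript
// Illustrative sketch: replacer-array pitfall vs. a recursive
// key-sorting replacer function.
const data = { b: 1, a: 2, globalArguments: { message: "hello" } };

// Broken: a replacer ARRAY is a key whitelist applied at every depth,
// so "message" (not a top-level key) is silently dropped.
const broken = JSON.stringify(data, Object.keys(data).sort());

// Fixed: a replacer FUNCTION is called for every value in the tree,
// so keys can be sorted recursively without dropping anything.
function sortKeysReplacer(_key: string, value: unknown): unknown {
  if (value !== null && typeof value === "object" && !Array.isArray(value)) {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .sort(([a], [b]) => a.localeCompare(b)),
    );
  }
  return value; // arrays and scalars pass through unchanged
}
const fixed = JSON.stringify(data, sortKeysReplacer);
```

With the array form, `broken` serializes `globalArguments` as `{}`; with the function form, `fixed` contains the full nested content with keys sorted at every depth.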

## Why this poses no risk to existing workflows

`definitionHash` is used in exactly two places, neither of which
involves decision-making logic:

1. **Provenance metadata in `ModelOutput`** — purely informational,
stored alongside execution results for audit purposes. New runs will
produce new hashes; old data on disk retains its old hashes untouched.
Nothing compares old hashes to new ones.

2. **Backward compatibility in `OwnerDefinition`** — the
`definitionHash` field is retained for compatibility with existing data
on disk, but **ownership is determined solely by `ownerType +
ownerRef`**, as confirmed by `Data.isOwnedBy()` and its tests.

No cache invalidation, no `--last-evaluated` behavior (which loads
cached YAML directly from disk), and no conditional logic anywhere in
the codebase depends on this hash value. Changing the hash output has
zero functional impact on existing workflows or stored data.

## Test plan

- Updated `Definition.computeHash returns different hash for different
nested content` test to use a fixed ID, confirming nested content
differences produce different hashes
- All 36 definition tests pass
- `deno fmt`, `deno lint`, `deno check` all pass

Generated with [Claude Code](https://claude.com/claude-code)

v20260216.213854.0-sha.8a9a688c

fix: use synchronous writes in RunFileSink (#361)

## Summary

Fixes #333 — reported by @umag.

`RunFileSink` used async `fd.write()` in a fire-and-forget pattern with
`.catch(() => {})`, which could theoretically produce out-of-order log
lines and silently swallowed all write errors.

## Plan

As noted in the discussion on #333, the fix is straightforward: swap `fd.write()` for `fd.writeSync()`. This works because:

- **LogTape's `Sink` type is synchronous** (`(record: LogRecord) =>
void`) — async writes were a mismatch with the API contract
- **`writeSync` guarantees ordering** — each write completes before the
sink returns, eliminating any possibility of interleaving
- **Performance impact is negligible** — log lines are small and
OS-level sync writes to a single fd are fast
- **Error handling should be silent but present** — logging
infrastructure must not crash workflows, so a try-catch is appropriate
(but not a silent `.catch(() => {})` on a detached promise)

## Implementation

**`src/infrastructure/logging/run_file_sink.ts`**
- Replaced `writer.fd.write(writer.encoder.encode(line)).catch(() =>
{})` with `writer.fd.writeSync(writer.encoder.encode(line))` wrapped in
a try-catch
- The try-catch preserves the "don't crash on write failure" behavior
without silently swallowing errors from detached promises

**`src/infrastructure/logging/run_file_sink_test.ts`**
- Removed all `await new Promise((r) => setTimeout(r, 50))` delays from
7 tests — these existed solely to wait for the async writes to settle.
With synchronous writes, data is on disk by the time `sinkFn()` returns.
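
A minimal sink following this pattern can be sketched as below. This is illustrative, not the actual `RunFileSink`; it uses `node:fs` for portability, where the real code uses Deno file handles, and the `makeFileSink` name is made up here.

```typescript
import { openSync, writeSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// LogTape's Sink contract is synchronous: the write must complete
// before the function returns.
type LogRecord = { message: string };
type Sink = (record: LogRecord) => void;

function makeFileSink(path: string): Sink {
  const fd = openSync(path, "a");
  const encoder = new TextEncoder();
  return (record) => {
    try {
      // writeSync returns only after the line is handed to the OS,
      // so lines cannot interleave out of order.
      writeSync(fd, encoder.encode(record.message + "\n"));
    } catch {
      // Logging must never crash the workflow; failures are dropped
      // here, but synchronously, not via a detached promise.
    }
  };
}

const path = join(tmpdir(), `sink-demo-${Date.now()}.log`);
const sink = makeFileSink(path);
sink({ message: "first" });
sink({ message: "second" });
// No settle delay needed: data is on disk when the sink returns.
const lines = readFileSync(path, "utf8").trim().split("\n");
```

Note how the read back immediately after the two calls needs no `setTimeout` delay, which is exactly why the 7 test delays could be deleted.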

## Test Plan

- [x] All 7 existing `RunFileSink` tests pass
- [x] `deno check` passes
- [x] `deno lint` passes
- [x] `deno fmt --check` passes

Co-authored-by: Magistr <2269529+umag@users.noreply.github.com>

v20260216.210434.0-sha.a897dcc4

fix: forEach dependency now waits on all expansions (#360)

## Summary

Fixes #326 — reported by @umag

When two `forEach` steps had a dependency relationship, only the first
expansion of the predecessor was tracked as a dependency
(`depExpanded[0].expandedName`). This meant dependent expansions could
start executing before all predecessor expansions had completed.

**Example:** If step A (`deploy`) and step B (`smoke-test`) both use
`forEach` over `[dev, staging, prod]`, and B depends on A — then
`smoke-test-staging` could start as soon as `deploy-dev` finished, even
though `deploy-staging` was still running.

## Fix

Changed the dependency mapping from `map` to `flatMap` so each expanded
step depends on **all** expansions of its dependency step:

```typescript
// Before (broken — only first expansion):
dependencies: node.dependencies.map((dep) => {
  const depExpanded = expandedStepsMap!.get(dep);
  return depExpanded && depExpanded.length > 0
    ? depExpanded[0].expandedName
    : dep;
}),

// After (fixed — all expansions):
dependencies: node.dependencies.flatMap((dep) => {
  const depExpanded = expandedStepsMap!.get(dep);
  return depExpanded && depExpanded.length > 0
    ? depExpanded.map((d) => d.expandedName)
    : [dep];
}),
```
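
The diff above can be exercised as a self-contained toy, using the deploy/smoke-test example from the summary. The `resolveDependencies` wrapper name is invented here for illustration:

```typescript
// Toy expansion map: one forEach step expanded over three environments.
const expandedStepsMap = new Map<string, { expandedName: string }[]>([
  ["deploy", [
    { expandedName: "deploy-dev" },
    { expandedName: "deploy-staging" },
    { expandedName: "deploy-prod" },
  ]],
]);

// The fixed flatMap mapping: each dependent expansion waits on ALL
// expansions of its dependency step.
function resolveDependencies(dependencies: string[]): string[] {
  return dependencies.flatMap((dep) => {
    const depExpanded = expandedStepsMap.get(dep);
    return depExpanded && depExpanded.length > 0
      ? depExpanded.map((d) => d.expandedName)
      : [dep]; // non-expanded dependencies pass through as-is
  });
}

const deps = resolveDependencies(["deploy"]);
// deps lists all three deploy expansions, so smoke-test-staging cannot
// start until deploy-staging (and every other deploy) has finished.
```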

## Impact

This bug **only** affects the case where a `forEach` step depends on
another `forEach` step. All other dependency combinations are
unaffected:

| Dependency type | Affected? | Why |
|---|---|---|
| Regular → Regular | No | No expansion happens |
| forEach → Regular | No | A regular step has a single expansion, so `flatMap` produces the same result as `map` |
| Regular → forEach | No | Regular steps don't go through the expansion path |
| **forEach → forEach** | **Yes (fixed)** | Previously only the first expansion was tracked; now all are |

## Tests added

- **Unit test** (`topological_sort_service_test.ts`): Verifies the sort
service correctly levels forEach-expanded nodes where each expansion
depends on all expansions of its predecessor
- **Integration test** (`foreach_test.ts`): End-to-end test with two
forEach steps (deploy + smoke-test) over 3 environments, verifying all 6
expanded steps are created and succeed with correct dependency ordering

## Test plan

- [x] `deno check` — type checking passes
- [x] `deno lint` — no issues
- [x] `deno fmt` — formatted
- [x] `deno run test` — all 1713 tests pass
- [x] `deno run compile` — compiles successfully

v20260216.204244.0-sha.fbac7c1c

fix: Resolve TOCTOU race condition in data version allocation (#359)

## Summary

Fixes #327 — TOCTOU race condition in `unified_data_repository.ts` where
concurrent `allocateVersion()` and `save()` calls could compute the same
version number, causing silent data loss.

## Reasoning

Both `allocateVersion()` and `save()` followed a read-then-create
pattern: read existing version directories, compute `max + 1`, then
create the directory with `ensureDir()`. Because `ensureDir()` uses
`recursive: true`, it silently succeeds even if the directory already
exists — so two concurrent calls that both compute version N would both
"succeed", with the last writer silently overwriting the first.

This is a real risk because workflow steps execute in parallel via
`Promise.allSettled()` and jobs execute in parallel via `Promise.all()`.

The fix uses `Deno.mkdir()` **without** `recursive: true` as an atomic
claim mechanism. `mkdir` is an atomic kernel-level operation — it either
creates the directory or throws `AlreadyExists`. On collision, we
increment the version and retry. This avoids introducing any locking
infrastructure; the directory creation itself is the lock.
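
The claim loop can be sketched as follows. This is a sketch under the assumptions above, not the repository's `atomicAllocateVersionDir()`: it uses `node:fs` (`mkdirSync` without `recursive` throws `EEXIST`, the counterpart of Deno's `AlreadyExists`), and the function name is illustrative.

```typescript
import { mkdirSync, mkdtempSync, readdirSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// A non-recursive mkdir either creates the directory or throws, so the
// directory itself acts as the lock; no extra infrastructure needed.
const MAX_RETRIES = 100;

function allocateVersionDir(baseDir: string, startVersion: number): number {
  let version = startVersion;
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    try {
      mkdirSync(join(baseDir, `v${version}`)); // atomic create-or-throw
      return version; // claimed this version
    } catch (err) {
      if ((err as { code?: string }).code !== "EEXIST") throw err;
      version++; // collision: another caller claimed it first; retry
    }
  }
  throw new Error(`could not allocate a version after ${MAX_RETRIES} tries`);
}

const base = mkdtempSync(join(tmpdir(), "versions-"));
const first = allocateVersionDir(base, 1);
const second = allocateVersionDir(base, 1); // collides on v1, claims v2
```

Two callers that both start from the same computed version can no longer both "succeed" on the same directory; the second one observes the collision and moves on.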

## Plan vs Implementation

| Plan Step | Status | Notes |
|---|---|---|
| **Step 1:** Add failing concurrent tests | Done | Two tests: concurrent `allocateVersion()` (10 parallel calls, assert unique versions) and concurrent `save()` (10 parallel calls with distinct content, assert unique versions + content preserved). Uses `Deno.makeTempDir()`, `Data.create()`, `Promise.all()`. |
| **Step 2:** Add `atomicAllocateVersionDir()` private method | Done | Added after `getDataNameDir()`. Ensures the parent dir with `recursive: true`, reads the starting version via `listVersions()`, loops with non-recursive `Deno.mkdir()`, catches `AlreadyExists` to retry with an incremented version, rethrows other errors, caps at 100 retries. |
| **Step 3:** Update `allocateVersion()` | Done | Replaced the racy `listVersions` + `ensureDir` block with an `atomicAllocateVersionDir()` call. Ownership validation unchanged. |
| **Step 4:** Update `save()` | Done | Same replacement. Metadata write, content write, and symlink update all unchanged. |
| **Step 5:** Run tests and confirm pass | Done | All 1713 tests pass, including the 2 new concurrent tests. |
| **Step 6:** Remove unused `ensureDir` import | Done | `import { ensureDir } from "@std/fs"` removed — no other usages in the file. |

No deviations from the plan. All steps implemented exactly as specified.

## Files Changed

- `src/infrastructure/persistence/unified_data_repository.ts` — core fix
(atomic version allocation)
- `src/infrastructure/persistence/unified_data_repository_test.ts` —
concurrent race condition tests

## Test plan

- [x] `deno check` — type checking passes
- [x] `deno lint` — linting passes
- [x] `deno fmt` — formatting passes
- [x] `deno run test` — all 1713 tests pass
- [x] `deno run compile` — binary compiles successfully

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Magistr <2269529+umag@users.noreply.github.com>

v20260216.200049.0-sha.d0571c14

feat: Add multi-tool support (Claude, Cursor, OpenCode, Codex) (#358)

## Summary

Closes #355

Adds `--tool` flag to `swamp repo init` and `swamp repo upgrade` so
users of Cursor, OpenCode, and Codex get first-class support alongside
Claude Code. Default remains `claude` for full backward compatibility.

- `swamp repo init --tool cursor` creates `.cursor/skills/` +
`.cursor/rules/swamp.mdc`
- `swamp repo init --tool opencode` creates `.agents/skills/` +
`AGENTS.md`
- `swamp repo init --tool codex` creates `.agents/skills/` + `AGENTS.md`
- `swamp repo init` (default) behaves exactly as before
- `swamp repo upgrade` reads the stored tool from `.swamp.yaml` marker
- `swamp repo upgrade --tool cursor` switches an existing repo to a
different tool

## What Each Tool Gets

| Artifact | Claude Code | Cursor | OpenCode | Codex |
|----------|------------|--------|----------|-------|
| Skills dir | `.claude/skills/` | `.cursor/skills/` | `.agents/skills/` | `.agents/skills/` |
| Instructions file | `CLAUDE.md` | `.cursor/rules/swamp.mdc` | `AGENTS.md` | `AGENTS.md` |
| Permissions | `.claude/settings.local.json` | N/A | N/A | N/A |
| Gitignore entry | `.claude/` | `.cursor/skills/` | `.agents/skills/` | `.agents/skills/` |
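
The per-tool mapping in the table can be expressed as lookup tables keyed by the tool type. `AiTool` and `SKILL_DIRS` are named in the plan below; `INSTRUCTIONS_FILES` is a hypothetical name used here for illustration:

```typescript
// Sketch of the per-tool mapping; actual structures in repo_service.ts
// may differ in shape and naming.
type AiTool = "claude" | "cursor" | "opencode" | "codex";

const SKILL_DIRS: Record<AiTool, string> = {
  claude: ".claude/skills/",
  cursor: ".cursor/skills/",
  opencode: ".agents/skills/",
  codex: ".agents/skills/",
};

const INSTRUCTIONS_FILES: Record<AiTool, string> = {
  claude: "CLAUDE.md",
  cursor: ".cursor/rules/swamp.mdc",
  opencode: "AGENTS.md",
  codex: "AGENTS.md",
};
```

Keying everything on a single union type is what lets the CLI validate `--tool` with an enum and keeps opencode and codex sharing the `AGENTS.md` convention without special-casing.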

## Files Changed

| File | Change |
|------|--------|
| `src/infrastructure/persistence/repo_marker_repository.ts` | Added `AiTool` type and `tool?` field to `RepoMarkerData` |
| `src/domain/repo/repo_service.ts` | Core multi-tool logic: skill dirs, instructions files, gitignore, settings gating |
| `src/cli/commands/repo_init.ts` | `--tool` flag on init, upgrade, and repo commands |
| `src/presentation/output/repo_output.ts` | Renamed fields, added `tool` to output data |
| `src/domain/repo/repo_service_test.ts` | Updated existing tests + 10 new multi-tool tests (33 total) |

## Plan vs Implementation

The implementation follows the plan from #355 with one deliberate
deviation:

| Plan Item | Status | Notes |
|-----------|--------|-------|
| 1. `AiTool` type + `SKILL_DIRS` | Done | Type defined in the marker repo, re-exported from the service |
| 2. `--tool` flag on CLI | Done | Uses Cliffy `EnumType` for validation |
| 3. Updated options/result types | Done | Renamed `claudeMdCreated` → `instructionsFileCreated` and `claudeSettingsCreated` → `settingsCreated` |
| 4. Tool-aware `init()` | Done | Skills, instructions, settings, and gitignore are all tool-aware |
| 5. Tool-specific instructions | Done | Cursor gets MDC frontmatter; opencode/codex get AGENTS.md |
| 6. Tool-aware gitignore | Done | Each tool gets appropriate entries plus the common swamp entries |
| 7. No changes to `SkillAssets` | Done | Already generic |
| 8. Updated output types | Done | Renamed fields, added tool, updated log rendering |
| 9. Tool in `.swamp.yaml` marker | Done | Stored on init, read on upgrade |
| 10. Tool-aware `upgrade()` | Done | CLI override > marker > "claude" fallback works. Old tool directories are intentionally preserved (see below) |
| 11. Updated tests | Done | 33 tests, 10 new multi-tool tests |
| 12. No compile script changes | Done | N/A |

### Why old tool directories are preserved on switch

The plan suggested cleaning up the old tool's skill directory when
switching tools via `swamp repo upgrade --tool <new>`. This was
intentionally **not** implemented because users may want multiple tools
active in the same repo — e.g. running both Claude Code and Cursor side
by side. Each tool's skills directory is independent and gitignored, so
having multiple coexist is harmless and supports real multi-tool
workflows.

## Test plan

- [x] `deno check` passes
- [x] `deno lint` passes
- [x] `deno fmt` passes
- [x] `deno run test` — all 1711 tests pass (33 repo service tests, 10
new)
- [x] Manual: `swamp repo init` (default claude behavior unchanged)
- [x] Manual: `swamp repo init --tool cursor`
- [x] Manual: `swamp repo init --tool opencode`
- [x] Manual: `swamp repo init --tool codex`
- [x] Manual: `swamp repo upgrade` (reads tool from marker)
- [x] Manual: `swamp repo upgrade --tool cursor` (switches tool)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Walter Heck <walterheck@helixiora.com>

v20260216.180446.0-sha.5b90276b

Chore: Update swamp repo init output (#357)

We want to make sure people know what to do after install, and to welcome them after they initialize a repo.

v20260216.165708.0-sha.a1ac98a4

fix: Update Discord invite link to swamp-club (#356)

## Summary

- Update broken Discord invite link from `system-init` to `swamp-club`
in README.md and CODE_OF_CONDUCT.md

Closes #354

Co-authored-by: Walter Heck <walterheck@helixiora.com>

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Walter Heck <walterheck@helixiora.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

v20260214.231009.0-sha.68170697

fix: GC scanning now handles nested/namespaced model types (#352)

## Summary

Fixes #330.

Garbage collection (`swamp data gc`) was silently failing to find and clean up expired data for any model with a nested or namespaced type — including built-in types like `command/shell` and user-defined types like `aws/ec2/vpc` or `@docker/host`.

The root cause was that `DefaultDataLifecycleService.findExpiredData()` and the version GC loop in `deleteExpiredData()` both scanned `.swamp/data/` using a flat 3-level `readDir()` loop (`type → modelId → dataName`). For a type like `command/shell`, the scanner would read `command` as the full type and `shell` as a model ID, producing completely wrong lookups. Expired data for all nested/namespaced types was never garbage collected.
## What changed

- **`findExpiredData()`** — replaced the manual 3-level directory scan with a call to the repository's `findAllGlobal()` method, which already correctly handles arbitrary nesting depth using recursive directory walking with `isModelIdDirectory()` detection.
- **`deleteExpiredData()` version GC loop** — replaced the second manual directory scan with unique type+modelId pairs derived from `findAllGlobal()`.
- **Removed the `repoDir` constructor parameter** from `DefaultDataLifecycleService`, since the service no longer does its own filesystem path construction.
- **Removed unused imports** (`join`, `SWAMP_SUBDIRS`, `swampPath`) and changed `ModelType` to a type-only import.
- **Added tests** for `findExpiredData` covering nested types (`aws/ec2/vpc`, `command/shell`), non-expired data, and empty repositories.

## What this means for users

If you use models with nested types (like `command/shell`, custom extension models under namespaces like `aws/ec2/vpc`, or scoped types like `@docker/host`), `swamp data gc` will now correctly identify and clean up expired data and old versions for those models. Previously this data would accumulate indefinitely, regardless of lifetime or garbage collection policies.

## Files changed

| File | Change |
|------|--------|
| `src/domain/data/data_lifecycle_service.ts` | Replace manual dir scanning with `findAllGlobal()`, remove the `repoDir` param |
| `src/cli/commands/data_gc.ts` | Remove `repoDir` from the constructor call |
| `src/domain/data/data_lifecycle_service_test.ts` | Add `findAllGlobal` to the mock, add nested type tests |

## Test plan

- [x] `deno check` — type checking passes
- [x] `deno lint` — no lint errors
- [x] `deno fmt` — formatting correct
- [x] `deno run test` — all 1701 tests pass
- [x] `deno run compile` — binary compiles successfully

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Magistr <2269529+umag@users.noreply.github.com>