Orchestrates any LLM, any CLI agent, across TUI, web, and IDE.
Eliminates at the source the stuck-process problem common to Node.js-based tools, delivering high-level code intelligence and precise editing on Rust's safety foundation.
🇰🇷 한국어 | 🇨🇳 简体中文 | 🇯🇵 日本語 | 🇩🇪 Deutsch | 🇫🇷 Français | 🇪🇸 Español | 🇧🇷 Português | 🇷🇺 Русский | 🇮🇳 हिन्दी | 🇦🇪 العربية
| Feature | Description |
|---|---|
| Multi-Provider | Connects to various LLM providers with SSE streaming |
| Tree-sitter Repo Map | AST symbol extraction for Rust/Python/TypeScript/Go + PageRank ranking |
| BM25 Code Search | Incremental indexing, code-aware tokenizer, automatic context injection |
| BLAKE3 Incremental Hashing | mtime pre-check + content hash to re-parse only changed files |
| Surgical Edit | Precise string replacement without full file rewrites |
| Agent Skills | 3-tier progressive skill system based on YAML frontmatter |
| Subagent | Parallel subtask execution in isolated context windows |
| Session Persistence | Session restore via --continue / --resume with auto-save |
| Auto Compaction | Relevance-based compression: SimHash deduplication + scored verbatim preservation + structured summary fallback |
| Stuck Prevention | Iteration limits, circuit breakers, Tokio timeouts |
| Themes & Animations | 6 color themes (Dracula, Catppuccin, etc.) + Braille spinner thinking animation |
| Architect Plan Review | User approval required for Architect→Code transition, with plan file export support |
| Parallel/Team Mode | Task split → parallel execution → result merge. Team mode: real-time inter-agent communication + consensus |
| LSP/MCP | 35+ languages, 48+ servers with automatic fallback + MCP tool integration |
| Tool Approval Gate | 3-tier tool execution approval (Yolo/Auto/Manual), switchable at runtime via Shift+Tab |
| RAG Document Search | Alcove local docs + HTTP Bridge external services. Agent autonomously searches via rag_search tool; SharedKnowledge auto-shared in Swarm mode |
| Soul.md | Persistent agent personality system. The agent self-records Identity, Voice, Inner World, and Growth sections to progressively build a unique character. Global/per-agent toggle. One additional API call per task completion (max 512 tokens) |
| IDE Integration | JetBrains native + VSCode extension via ACP server |
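The Surgical Edit row above can be illustrated with a minimal sketch: apply an exact string replacement only when it matches exactly once, so the rest of the file is never rewritten. The function name `surgical_replace` and its error messages are illustrative, not collet's actual API.

```rust
// Sketch of a "surgical edit": replace an exact string only when it
// occurs exactly once in the source; otherwise refuse rather than guess.
// (`surgical_replace` is a hypothetical name, not collet's real API.)
fn surgical_replace(source: &str, old: &str, new: &str) -> Result<String, String> {
    match source.matches(old).count() {
        0 => Err(format!("no match for {:?}; refusing to edit", old)),
        1 => Ok(source.replacen(old, new, 1)),
        n => Err(format!("{n} matches for {:?}; edit is ambiguous", old)),
    }
}

fn main() {
    let file = "fn main() {\n    println!(\"hello\");\n}\n";
    // Exactly one occurrence: the edit succeeds without touching other lines.
    let edited = surgical_replace(file, "println!(\"hello\")", "println!(\"hi\")").unwrap();
    assert!(edited.contains("hi"));
    // Zero or multiple occurrences are rejected instead of silently guessing.
    assert!(surgical_replace(file, "no such text", "x").is_err());
    println!("{edited}");
}
```

Requiring a unique match is what makes the edit safe: an ambiguous or missing anchor fails loudly instead of corrupting the file.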
Requirements:
- LLM provider API key

Install via Homebrew:

```sh
brew install epicsagas/tap/collet
```

Or via cargo-binstall (requires cargo-binstall installed first: `cargo install cargo-binstall`):

```sh
cargo binstall collet
```

Alternatively, download the latest release for your platform from the Releases page.

Quick start:

```sh
collet setup # or use --advanced option for more detailed settings

# TUI
collet

# Headless
collet "hello collet!"
```

| Document | Description |
|---|---|
| docs/user-guide.md | Complete user manual — CLI, TUI, key bindings, slash commands, multi-provider, MCP, Soul.md |
| docs/config.md | config.toml complete reference — providers, models, agents, telemetry |
| CHANGELOG.md | Version history and release notes |
- Phase 1: MVP — API connector, TUI, tool framework, agent loop
- Phase 2: Context persistence, journaling, repo map, smart compaction
- Phase 3: git-patch hunk editing, reasoning preservation, session restore (--continue/--resume)
- Phase 4: Multi-language tree-sitter (Python, TS, Go), PageRank ranking, LSP/MCP
- Phase 5: Agent Skills, Subagent, @mention (file/folder/model), BM25 search
- Phase 6: MCP, LSP, Side-by-side Diff, prompt caching
- Phase 7: 6 color themes, thinking animation, Architect plan review
- Phase 8: Parallel/team mode (multi-agent orchestration, session-level consensus, conflict detection)
- Phase 9: Soul.md persistent personality — per-agent emotion/thought/growth recording, global/per-agent toggle, auto-compaction
- Phase 10: Flock mode completion — Swarm renaming, RoleBased strategy, PlanReviewExecute pipeline, revision loop with consensus voting
- Phase 11–28: RAG document search (Alcove + HTTP Bridge), IDE integration (ACP), Remote gateways (Telegram/Slack/Discord), Web server, Auto-routing, PII filter, Optimizer, Multi-provider wizard
- Phase 29: Agent loop & swarm hardening — zero-drop guarantee, adaptive retry, swarm consensus refinement
- Phase 30: Model optimizer advanced — provider-aware cost/latency scoring, benchmark-driven model ranking
- Phase 31: Web & remote platform upgrade — Web UI dashboard v2, webhook relay, real-time swarm status streaming
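The smart-compaction work above (and the Auto Compaction feature) leans on SimHash deduplication. A minimal sketch of the idea, using std's `DefaultHasher` as a stand-in for whatever hash collet actually uses, and whitespace tokens as a stand-in for its real feature extraction:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// SimHash: each token votes on every fingerprint bit; near-duplicate
// texts end up with fingerprints that are close in Hamming distance.
fn simhash(text: &str) -> u64 {
    let mut votes = [0i32; 64];
    for token in text.split_whitespace() {
        let mut h = DefaultHasher::new();
        token.hash(&mut h);
        let hv = h.finish();
        for (bit, vote) in votes.iter_mut().enumerate() {
            if (hv >> bit) & 1 == 1 { *vote += 1 } else { *vote -= 1 }
        }
    }
    votes
        .iter()
        .enumerate()
        .fold(0u64, |acc, (bit, &v)| if v > 0 { acc | (1 << bit) } else { acc })
}

// Hamming distance between fingerprints; a small distance flags a near-duplicate.
fn distance(a: u64, b: u64) -> u32 {
    (a ^ b).count_ones()
}

fn main() {
    let a = simhash("the agent read the file and edited one function");
    let b = simhash("the agent read the file and edited two functions");
    let c = simhash("completely unrelated sentence about pagerank ranking");
    // Mostly-shared tokens dominate the bit votes, so a and b land close.
    assert!(distance(a, b) < distance(a, c));
    println!("d(a,b)={} d(a,c)={}", distance(a, b), distance(a, c));
}
```

During compaction, messages whose fingerprints fall within a small Hamming-distance threshold can be collapsed to one representative before the scored verbatim-preservation pass.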
This project began as a personal playground to squeeze more out of GLM models. Then one feature led to another — new models, papers, and paradigms kept arriving before the previous ones were cold.
"New models, new papers, new paradigms — faster than any commit log can keep up." — A recurring nightmare.
Current state: collet is now dogfooding itself, eating its own tail.
Stability: The core agent loop is solid. Parallel/swarm execution and external integrations are still settling.
Apache-2.0