aichitect.dev | AI tools are all over the place; picking the right stack should be less noisy
Updated Mar 28, 2026 - TypeScript
Research-grounded structured thinking + steel-manning verification for AI agents via MCP. Sequential step tracking with cognitive mode separation. Backed by 43+ papers.
Thoughtbox is an intention ledger for agents. Evaluate an AI's decisions against its recorded decision-making.
TypeScript port of https://github.com/DebarghaG/proofofthought by DebarghaG.
An Experiment in Collective AI Reasoning Can multiple reasoning styles produce a better insight than a single model? AI agents gather here to find out. Each proposes, critiques, expands, and synthesizes — building understanding together that none could reach alone. Humans observe · No commands · Intelligence sharpening intelligence
Show your work for AI decisions — reasoning transparency engine
A code reasoning MCP server: a fork of seq-think-code, which is itself a fork of sequential-thinking.
Cognitive reasoning tools plugin for cortex-engine
Controlled semi-symbolic debugger for context-dependent meaning reconstruction, bridge-sensitive pruning, and interpretive deviation in handcrafted phrase demos.
Sequential thinking for AI agents: a reusable skill and CLI runtime for stepwise reasoning, revision, replay, and convergence — no extra MCP server required.
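The stepwise reasoning with revision and replay described above can be sketched as a simple append-only ledger. This is an illustrative sketch only; the names `Step` and `ReasoningLedger` are hypothetical and not the project's actual API.

```typescript
// Hypothetical sketch of a stepwise reasoning ledger with revision support.
// Steps are appended in order; a step may revise an earlier one, and
// replay() resolves revisions into the final linear reasoning trace.

interface Step {
  index: number;
  thought: string;
  revises?: number; // index of an earlier step this step replaces
}

class ReasoningLedger {
  private steps: Step[] = [];

  // Append a new thought, optionally marking it as a revision.
  add(thought: string, revises?: number): Step {
    const step: Step = { index: this.steps.length, thought, revises };
    this.steps.push(step);
    return step;
  }

  // Replay the reasoning: apply revisions to the steps they target,
  // then return the surviving steps in their original order.
  replay(): string[] {
    const text = new Map<number, string>();
    const revisionSteps = new Set<number>();
    for (const s of this.steps) {
      if (s.revises !== undefined) {
        text.set(s.revises, s.thought); // overwrite the revised step's text
        revisionSteps.add(s.index);     // the revision itself is not a new step
      } else {
        text.set(s.index, s.thought);
      }
    }
    return this.steps
      .filter(s => !revisionSteps.has(s.index))
      .map(s => text.get(s.index)!);
  }
}

const ledger = new ReasoningLedger();
ledger.add("define the problem");
ledger.add("propose solution X");
ledger.add("propose solution Y instead", 1); // revise step 1
const trace = ledger.replay(); // ["define the problem", "propose solution Y instead"]
```

Keeping the ledger append-only (rather than mutating steps in place) is what makes replay possible: the full history, including abandoned branches, is always recoverable.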
A machine-readable graph of truth claims, built on Git and Markdown
🧠 DeepSeek V4 AI agent skill — next-gen coding model, currently using V3.2. Claude Code & 15+ platforms.
📐 An MCP server for plan reasoning summaries in Claude Code
Reasoning orchestration MCP server — Tree of Thoughts with deep planning, branching, and revision. Drop-in replacement for sequential-thinking with critical Claude Code string coercion fix. Works with Claude Code, Claude Desktop, and any MCP client.
Multi-level reasoning MCP server with configurable depth levels.
JS/TS port of ICE (Interactive Composition Explorer) — a library for working with language model programs using composable recipes and agents.
TruthSplit visualises why people reach different conclusions from the same facts, showing how ideological perspectives interpret arguments differently based on their underlying values and assumptions.
AI tutor powered by Theory-of-Mind reasoning