Query multiple LLM CLIs in parallel and compare their responses.
```bash
pip install midtry
midtry "Explain the tradeoffs between REST and GraphQL"
```

MidTry spawns parallel calls to available AI CLIs (Claude, Gemini, Codex, etc.), each with a different reasoning perspective, and returns all responses for comparison.
```text
$ midtry "Debug this function"

╭─────────────────────── Task ───────────────────────╮
│ Debug this function                                 │
╰─────────────────────────────────────────────────────╯

CLIs available: claude, gemini, codex, qwen
Mode: ORDERED
Timeout: 120s per call

claude (conservative): done (12.3s)
gemini (analytical):   done (18.1s)
codex (creative):      done (15.7s)
qwen (adversarial):    done (22.4s)

=== RESPONSES ===

--- Response 1: Conservative (claude) ---
[methodical step-by-step analysis]

--- Response 2: Analytical (gemini) ---
[edge case focused breakdown]

--- Response 3: Creative (codex) ---
[alternative framing]

--- Response 4: Adversarial (qwen) ---
[challenges assumptions]

=== END RESPONSES ===
```
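Under the hood this is a fan-out over subprocesses. The sketch below shows the general pattern only, not MidTry's actual implementation; in particular, the assumption that each CLI accepts the prompt as a positional argument is illustrative:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Illustrative fan-out pattern, NOT MidTry's real code.
# Assumption: each CLI takes the prompt as a positional argument.
CLIS = ["claude", "gemini", "codex", "qwen"]

def ask(cli: str, prompt: str, timeout: int = 120) -> str:
    """Run one CLI as a subprocess and capture its stdout."""
    proc = subprocess.run(
        [cli, prompt], capture_output=True, text=True, timeout=timeout
    )
    return proc.stdout

with ThreadPoolExecutor(max_workers=len(CLIS)) as pool:
    futures = {cli: pool.submit(ask, cli, "Debug this function") for cli in CLIS}
    responses = {cli: f.result() for cli, f in futures.items()}
```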
```bash
pip install midtry
```

Requires at least one supported CLI installed:
| CLI | Tool |
|---|---|
| claude | Claude Code |
| gemini | Gemini CLI |
| codex | OpenAI Codex |
| qwen | Qwen CLI |
| opencode | OpenCode |
| copilot | GitHub Copilot CLI |
Check what's available:
```bash
midtry detect
```
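Detection is essentially a PATH lookup. A minimal equivalent, using the CLI commands from the table above (a sketch, not MidTry's actual logic):

```python
import shutil

# Candidate commands from the table above; "available" = found on PATH.
KNOWN_CLIS = ["claude", "gemini", "codex", "qwen", "opencode", "copilot"]

available = [cli for cli in KNOWN_CLIS if shutil.which(cli) is not None]
print("CLIs available:", ", ".join(available))
```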
midtry "Your question or task"
# Select specific models
midtry --models claude,gemini "Question"
# Quick mode (2 models only)
midtry --quick "Question"
# Random perspective assignment
midtry --random "Question"
# Demo mode (no API calls)
midtry demo
```

MidTry can also be used as a Python library:

```python
import midtry

result = midtry.solve("Optimize this SQL query", clis=["claude", "gemini"])
for r in result.results:
    print(f"{r.cli} ({r.perspective.value}): {r.output[:100]}...")
```

Each CLI receives the task with a different framing (a sketch of the prompt templating follows the table):
| Perspective | Prompt style |
|---|---|
| Conservative | Careful, methodical, prioritizes correctness |
| Analytical | Systematic, considers edge cases |
| Creative | Alternative approaches, simpler reframings |
| Adversarial | Challenges obvious answers, looks for tricks |
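The framing is plain string templating. A minimal sketch using the default templates from the `[perspectives.prompts]` config shown below (the function name is illustrative, not MidTry's API):

```python
# Fill {task} into a perspective-specific template.
# Templates mirror the [perspectives.prompts] defaults below.
PROMPTS = {
    "conservative": "Solve carefully. Double-check each step. Task: {task}",
    "analytical": "Break down systematically. Consider edge cases. Task: {task}",
    "creative": "Consider unconventional approaches. Task: {task}",
    "adversarial": "Challenge the obvious answer. Task: {task}",
}

def frame(perspective: str, task: str) -> str:
    return PROMPTS[perspective].format(task=task)

print(frame("adversarial", "Debug this function"))
# Challenge the obvious answer. Task: Debug this function
```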
Create `config.toml` in your working directory:
```toml
[midtry]
timeout_seconds = 120
max_parallel = 4
mode = "random"  # or "ordered"

[perspectives]
sources = ["claude", "gemini", "codex", "qwen"]

[perspectives.prompts]
conservative = "Solve carefully. Double-check each step. Task: {task}"
analytical = "Break down systematically. Consider edge cases. Task: {task}"
creative = "Consider unconventional approaches. Task: {task}"
adversarial = "Challenge the obvious answer. Task: {task}"
```
Or use environment variables:

```bash
MIDTRY_TIMEOUT=60 midtry "Question"
MIDTRY_CLIS="claude gemini" midtry "Question"
```
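A sketch of how these settings could be resolved, assuming environment variables take precedence over `config.toml` (MidTry's actual precedence rules aren't documented here):

```python
import os
import tomllib  # Python 3.11+

# Illustrative resolution only; assumes env vars override config.toml.
with open("config.toml", "rb") as f:
    cfg = tomllib.load(f)

timeout = int(os.environ.get("MIDTRY_TIMEOUT", cfg["midtry"]["timeout_seconds"]))
# MIDTRY_CLIS is space-separated, e.g. "claude gemini".
clis = os.environ.get("MIDTRY_CLIS", " ".join(cfg["perspectives"]["sources"])).split()
print(timeout, clis)
```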
Limitations:

- Requires external CLI tools installed and authenticated
- Latency scales with the slowest model
- No automatic synthesis of responses (you aggregate manually; see the sketch after this list)
- Quality depends on underlying models
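Since there is no built-in synthesis, aggregation is up to you. One minimal approach using the Python API shown earlier:

```python
import midtry

result = midtry.solve("Debug this function", clis=["claude", "gemini"])

# Manual aggregation: print responses side by side for comparison.
for r in result.results:
    print(f"=== {r.cli} ({r.perspective.value}) ===")
    print(r.output)
```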
Use it for:

- Complex problems where multiple viewpoints help
- Code review, debugging, architectural decisions
- When you want to see how different models approach a problem
- Building confidence through diverse perspectives
Skip it for:

- Simple factual questions
- Time-sensitive queries
- Problems with obvious single solutions
MidTry applies ideas from DeepSeek-R1 (structured reasoning with verification) and mHC (multi-stream exploration) at inference time. The diversity comes from querying different models, not from training-time optimization.
License: MIT
Built with multi-agent consensus.