An AI coding agent built for the terminal - a Go implementation inspired by OpenCode.
- Native TUI: A responsive, native terminal UI built with Bubble Tea
- LSP Enabled: Automatically loads the right language servers for the LLM
- Multi-session: Start multiple agents in parallel on the same project
- Multi-Provider Support: Works with 11+ LLM providers:
- Anthropic (Claude 3.5 Sonnet/Haiku)
- OpenAI (GPT-4o, GPT-4o mini)
- Google Gemini (1.5 Pro/Flash, 2.0)
- Mistral AI (Large, Small, Codestral)
- Cohere (Command R+/R)
- Perplexity (Sonar Large/Small Online)
- DeepSeek (Chat, Coder)
- xAI (Grok Beta)
- Groq (Llama 3.3 70B - ultra-fast inference)
- Together AI (Llama 3.1 405B, Qwen 2.5 Coder, DeepSeek V3)
- Cerebras (Llama 3.3 70B on Cerebras WSE)
- Local Models Support:
- Ollama (localhost:11434) - 100% free & private
- LM Studio (localhost:1234) - GUI-based local inference
- Any OpenAI-compatible endpoint (see the config sketch after this list)
- Agent System: Customizable agents with permission management
- 7 Built-in Tools: read, write, edit, bash, list, grep, glob
- HTTP API Server: RESTful API for integration with other tools
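Since any OpenAI-compatible endpoint can act as a provider (see the local models item above), pointing gocode at a custom server only takes a few lines of config. A minimal sketch following the provider schema used in the examples below; the `custom` provider name, model name, and port are placeholders, not gocode defaults:

```bash
# Sketch: register any OpenAI-compatible server as a provider.
# "custom", "my-model", and port 8080 are placeholders.
cat > ~/.gocode/config.yaml << EOF
llm:
  default_provider: custom
  default_model: my-model
  providers:
    custom:
      base_url: http://localhost:8080/v1
      enabled: true
EOF
```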
| Feature | gocode | OpenCode |
|---|---|---|
| Language | Go (single binary) | TypeScript (Node.js) |
| Startup Time | < 50ms | ~500ms |
| Memory Usage | ~20MB | ~100MB |
| Providers | 11+ (including Groq, Together, Cerebras) | 20+ via AI SDK |
| Local Models | Built-in (Ollama, LM Studio) | Via config |
| Dependencies | Zero runtime deps | Node.js required |
| Configuration | YAML native | JSON |
- Ultra-fast providers: Groq (300+ tokens/s), Cerebras (hardware acceleration)
- Privacy-first: Run 100% locally with Ollama - no data leaves your machine
- Instant startup: Go binary starts in milliseconds
- Single binary: No npm install, no node_modules
- Focus on code: Terminal-first design keeps you in flow
- Extensible: Add custom providers via simple config
- Go 1.25.4 or higher
- Docker (optional, for containerized development)
- Git
```bash
go install github.com/maurorisonho/gocode/cmd/gocode@latest
```

Or build from source:

```bash
git clone https://github.com/maurorisonho/gocode
cd gocode
go build -o gocode ./cmd/gocode
```

Quick start with a hosted provider:

```bash
# Set up Groq (ultra-fast, cheap)
export GROQ_API_KEY="gsk_..."
# Start TUI
gocode
# Or run single prompt
gocode run -p groq "Create a REST API in Go"
```

Or run fully local:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3.2
# Start Ollama server
ollama serve
# Configure gocode
cat > ~/.gocode/config.yaml << EOF
llm:
  default_provider: ollama
  default_model: llama3.2
  providers:
    ollama:
      base_url: http://localhost:11434/v1
      enabled: true
EOF
# Use it!
gocode
```

Pick a provider to match the task:

```bash
# Fast agent with Groq
gocode run -p groq -m llama-3.3-70b-versatile "quick task"
# Complex task with Claude
gocode run -p anthropic -m claude-3-5-sonnet-20241022 "refactor code"
# Private task with local model
gocode run -p ollama -m llama3.2 "analyze sensitive code"
```

| Provider | Best Model | Speed | Cost (per 1M tokens) |
|---|---|---|---|
| Groq | llama-3.3-70b-versatile | ⚡⚡⚡⚡⚡ | $0.59 |
| Cerebras | llama-3.3-70b | ⚡⚡⚡⚡⚡ | $0.60 |
| Provider | Best Model | Quality | Cost (per 1M tokens) |
|---|---|---|---|
| Anthropic | claude-3-5-sonnet | ⭐⭐⭐⭐⭐ | $3.00 |
| OpenAI | gpt-4o | ⭐⭐⭐⭐⭐ | $2.50 |
| Together AI | llama-3.1-405b | ⭐⭐⭐⭐ | $3.50 |
| Provider | Setup | Privacy | Cost |
|---|---|---|---|
| Ollama | Easy | 100% | Free |
| LM Studio | GUI-based | 100% | Free |
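Before pointing gocode at a local endpoint, it is worth confirming the server is actually reachable. Both Ollama and LM Studio expose the standard OpenAI-compatible model listing route:

```bash
# List models on a local OpenAI-compatible server
curl http://localhost:11434/v1/models   # Ollama
curl http://localhost:1234/v1/models    # LM Studio
```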
See docs/PROVIDERS.md for the complete setup guide.
Execute a prompt directly without the TUI:

```bash
gocode run "Explain how to use context in Go"

# With specific provider and model
gocode run -p groq -m llama-3.3-70b-versatile "Fast response"
gocode run -p ollama -m llama3.2 "Private task"
```

Start as a background server:
```bash
gocode serve --port 4096
```
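Other tools can then talk to gocode over HTTP. The route and payload below are hypothetical, shown only to illustrate the integration pattern; check the project's API documentation for the actual endpoints exposed by `gocode serve`:

```bash
# Hypothetical route and payload -- the real API surface is defined
# by the gocode server, not by this sketch.
curl -X POST http://localhost:4096/run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Summarize this repository"}'
```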
GoCode looks for configuration in a .gocode/ directory in your project root, or in ~/.config/gocode/. Create .gocode/config.yaml:
```yaml
llm:
  default_provider: groq  # Ultra-fast!
  default_model: llama-3.3-70b-versatile
  providers:
    # Fast inference
    groq:
      api_key: ${GROQ_API_KEY}
      enabled: true

    # Best quality
    anthropic:
      api_key: ${ANTHROPIC_API_KEY}
      default_model: claude-3-5-sonnet-20241022

    openai:
      api_key: ${OPENAI_API_KEY}
      default_model: gpt-4o

    gemini:
      api_key: ${GEMINI_API_KEY}
      default_model: gemini-1.5-pro

agents:
  build:
    description: "Full development agent with all tools"
    tools:
      bash: allow
      write: allow
      edit: allow

  plan:
    description: "Planning agent without file modifications"
    tools:
      bash: ask
      write: deny
      edit: deny

permissions:
  edit: ask
  bash:
    "*": ask
    "git *": allow
    "ls *": allow
```