GoCode

An AI coding agent built for the terminal: a Go implementation inspired by OpenCode.

🚀 Features

  • Native TUI: A responsive, native terminal UI built with Bubble Tea
  • LSP Enabled: Automatically loads the right language servers for the LLM
  • Multi-session: Start multiple agents in parallel on the same project
  • Multi-Provider Support: Works with 11+ LLM providers:
    • Anthropic (Claude 3.5 Sonnet/Haiku)
    • OpenAI (GPT-4o, GPT-4o mini)
    • Google Gemini (1.5 Pro/Flash, 2.0)
    • Mistral AI (Large, Small, Codestral)
    • Cohere (Command R+/R)
    • Perplexity (Sonar Large/Small Online)
    • DeepSeek (Chat, Coder)
    • xAI (Grok Beta)
    • Groq (Llama 3.3 70B - Ultra-fast inference ⚡)
    • Together AI (Llama 3.1 405B, Qwen 2.5 Coder, DeepSeek V3)
    • Cerebras (Llama 3.3 70B on Cerebras WSE)
  • Local Models Support 🏠:
    • Ollama (localhost:11434) - 100% free & private
    • LM Studio (localhost:1234) - GUI-based local inference
    • Any OpenAI-compatible endpoint (see the sketch after this list)
  • Agent System: Customizable agents with permission management
  • 7 Built-in Tools: read, write, edit, bash, list, grep, glob
  • HTTP API Server: RESTful API for integration with other tools
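
Every "OpenAI-compatible" provider speaks the same chat-completions wire format, which is why a single base_url setting is enough to add one. As an illustration only (not gocode's actual internals), here is a minimal Go sketch of such a request against Ollama's local endpoint, using the same base URL as the configuration examples later in this README:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// Request/response shapes shared by every OpenAI-compatible
// chat-completions endpoint.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type chatResponse struct {
	Choices []struct {
		Message message `json:"message"`
	} `json:"choices"`
}

func main() {
	// Ollama's OpenAI-compatible endpoint, matching the base_url
	// used in the configuration examples below.
	body, _ := json.Marshal(chatRequest{
		Model:    "llama3.2",
		Messages: []message{{Role: "user", Content: "Say hello in Go."}},
	})

	resp, err := http.Post("http://localhost:11434/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	raw, _ := io.ReadAll(resp.Body)
	var out chatResponse
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}

Because the format is shared, pointing a provider entry at a different base_url is all it takes to talk to any compatible backend.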

⚡ Why gocode?

vs OpenCode

| Feature | gocode | OpenCode |
|---|---|---|
| Language | Go (single binary) | TypeScript (Node.js) |
| Startup Time | < 50ms | ~500ms |
| Memory Usage | ~20MB | ~100MB |
| Providers | 11+ (including Groq, Together, Cerebras) | 20+ via AI SDK |
| Local Models | ✅ Built-in (Ollama, LM Studio) | ✅ Via config |
| Dependencies | Zero runtime deps | Node.js required |
| Configuration | YAML (native) | JSON |

Unique Features

  • 🚀 Ultra-fast providers: Groq (300+ tokens/s), Cerebras (hardware acceleration)
  • 🏠 Privacy-first: Run 100% locally with Ollama - no data leaves your machine
  • ⚡ Instant startup: Go binary starts in milliseconds
  • 📦 Single binary: No npm install, no node_modules
  • 🎯 Focus on code: Terminal-first design keeps you in flow
  • 🔧 Extensible: Add custom providers via simple config

📋 Requirements

  • Go 1.25.4 or higher
  • Docker (optional, for containerized development)
  • Git

📦 Installation

go install github.com/maurorisonho/gocode/cmd/gocode@latest

Or build from source:

git clone https://github.com/maurorisonho/gocode
cd gocode
go build -o gocode ./cmd/gocode

Quick Start

Option 1: Cloud Provider (Fast)

# Set up Groq (ultra-fast, cheap)
export GROQ_API_KEY="gsk_..."

# Start TUI
gocode

# Or run single prompt
gocode run -p groq "Create a REST API in Go"

Option 2: Local Model (100% Private & Free)

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2

# Start Ollama server
ollama serve

# Configure gocode
cat > ~/.gocode/config.yaml << EOF
llm:
  default_provider: ollama
  default_model: llama3.2

providers:
  ollama:
    base_url: http://localhost:11434/v1
    enabled: true
EOF

# Use it!
gocode

Option 3: Mix Both

# Fast agent with Groq
gocode run -p groq -m llama-3.3-70b-versatile "quick task"

# Complex task with Claude
gocode run -p anthropic -m claude-3-5-sonnet-20241022 "refactor code"

# Private task with local model
gocode run -p ollama -m llama3.2 "analyze sensitive code"

🎯 Available Providers

Ultra-Fast Inference

| Provider | Best Model | Speed | Cost |
|---|---|---|---|
| Groq | llama-3.3-70b-versatile | ⚡⚡⚡⚡⚡ | $0.59/1M |
| Cerebras | llama-3.3-70b | ⚡⚡⚡⚡⚡ | $0.60/1M |

Best Quality

| Provider | Best Model | Quality | Cost |
|---|---|---|---|
| Anthropic | claude-3-5-sonnet | ⭐⭐⭐⭐⭐ | $3/1M |
| OpenAI | gpt-4o | ⭐⭐⭐⭐⭐ | $2.5/1M |
| Together AI | llama-3.1-405b | ⭐⭐⭐⭐ | $3.5/1M |

Local & Free

| Provider | Setup | Privacy | Cost |
|---|---|---|---|
| Ollama | Easy | 100% | Free |
| LM Studio | GUI-based | 100% | Free |

See docs/PROVIDERS.md for the complete setup guide.

Run Mode

Execute a prompt directly without the TUI:

gocode run "Explain how to use context in Go"

# With specific provider and model
gocode run -p groq -m llama-3.3-70b-versatile "Fast response"
gocode run -p ollama -m llama3.2 "Private task"

Server Mode

Start as a background server:

gocode serve --port 4096
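
The REST routes themselves are not documented in this README, so the snippet below is only a sketch against a hypothetical POST /run endpoint; the real route names and payload shape may differ. It shows how another tool could drive the server from Go:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NOTE: the route name ("/run") and payload shape here are
	// hypothetical placeholders, not gocode's documented API.
	payload, _ := json.Marshal(map[string]string{
		"prompt":   "Explain context.Context in Go",
		"provider": "groq",
	})

	resp, err := http.Post("http://localhost:4096/run",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}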

Configuration

GoCode looks for configuration in the .gocode/ directory in your project root or in ~/.config/gocode/.

Example Configuration

Create .gocode/config.yaml:

llm:
  default_provider: groq # Ultra-fast!
  default_model: llama-3.3-70b-versatile

providers:
  # Fast inference
  groq:
    api_key: ${GROQ_API_KEY}
    enabled: true

  # Best quality
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    default_model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    default_model: gpt-4o
  gemini:
    api_key: ${GEMINI_API_KEY}
    default_model: gemini-1.5-pro

agents:
  build:
    description: "Full development agent with all tools"
    tools:
      bash: allow
      write: allow
      edit: allow

  plan:
    description: "Planning agent without file modifications"
    tools:
      bash: ask
      write: deny
      edit: deny

permissions:
  edit: ask
  bash:
    "*": ask
    "git *": allow
    "ls *": allow
