adk-ralph

Ralph is a multi-agent autonomous development system that transforms a user's idea into a fully implemented project. It uses three specialized agents working in sequence:

  1. PRD Agent — Creates structured requirements from user prompts
  2. Architect Agent — Creates system design and task breakdown from PRD
  3. Ralph Loop Agent — Iteratively implements tasks until completion

Features

  • Multi-Agent Pipeline: Three specialized agents for requirements, design, and implementation
  • Interactive Chat Mode: REPL-based conversational interface with session persistence
  • Priority-Based Task Selection: Implements highest priority tasks first with dependency checking
  • Progress Tracking: Append-only progress log captures learnings and gotchas
  • Test-Before-Commit: Only commits code that passes tests
  • Multi-Provider LLM Support: Gemini, OpenAI, Anthropic, DeepSeek, Groq, Ollama, Fireworks, Together, Mistral, Perplexity, Cerebras, SambaNova, Bedrock, Azure AI
  • Multi-Language Support: Rust, Python, TypeScript, Go, Java
  • Telemetry Integration: Full observability with OpenTelemetry

Architecture

Ralph follows an iterative, file-driven development pattern where agents learn from their previous work through persistent files (progress.json, tasks.json) rather than in-memory state.

flowchart TB
    subgraph Input
        UP[User Prompt]
    end

    subgraph "Phase 1: Requirements"
        PA[PRD Agent<br/>gemini-3.1-pro-preview]
        PRD[prd.md]
        UP --> PA
        PA --> PRD
    end

    subgraph "Phase 2: Architecture"
        AA[Architect Agent<br/>gemini-3-pro-preview]
        DES[design.md]
        TSK[tasks.json]
        PRD --> AA
        AA --> DES
        AA --> TSK
    end

    subgraph "Phase 3: Implementation"
        RL[Ralph Loop Agent<br/>gemini-2.5-flash]
        PROG[progress.json]
        CODE[Source Files]
        TEST[Test Files]
        GIT[Git Commits]

        TSK --> RL
        DES --> RL
        PROG --> RL
        RL --> CODE
        RL --> TEST
        RL --> GIT
        RL --> PROG
        RL -->|Update Status| TSK
    end

    subgraph Output
        DONE[Completion Promise]
        RL -->|All Tasks Done| DONE
    end

Three-Agent Pipeline

| Phase | Agent | Input | Output | Purpose |
|-------|-------|-------|--------|---------|
| 1. Requirements | PRD Agent | User prompt | prd.md | Generate structured requirements with user stories |
| 2. Design | Architect Agent | prd.md | design.md, tasks.json | Create architecture and task breakdown |
| 3. Implementation | Ralph Loop Agent | design.md, tasks.json, progress.json | Source code, tests, commits | Iteratively implement all tasks |

Default Agent Model Strategy

| Agent | Default Provider | Default Model | Thinking | Purpose |
|-------|------------------|---------------|----------|---------|
| PRD Agent | Gemini | gemini-3.1-pro-preview | Configurable | Deep requirements analysis |
| Architect Agent | Gemini | gemini-3-pro-preview | Configurable | Complex design decisions |
| Ralph Loop Agent | Gemini | gemini-2.5-flash | Disabled | Fast implementation iterations |

Note: With adk-rust v0.5.0, all agents benefit from tool timeout, retry budgets, and circuit breakers. The loop agent also uses LlmEventSummarizer for automatic context compaction during long-running sessions.

All agents are fully configurable — see Configuration for per-agent provider and model overrides.

ADK-Rust Integration

Ralph is built on the adk-rust framework, using its unified crate with feature flags:

  • Core: Agent, Content, Part traits for agent abstraction
  • Models: Provider clients for Gemini, OpenAI, Anthropic, DeepSeek, Groq, Ollama, Fireworks, Together, Mistral, Perplexity, Cerebras, SambaNova, Bedrock, Azure AI
  • Runner: Agent execution with session management and context compaction
  • Sessions: InMemorySessionService for session state
  • Telemetry: OpenTelemetry 0.31 integration for tracing and metrics
  • Agents: LlmAgent, LoopAgent, SequentialAgent, ParallelAgent, LlmEventSummarizer
  • Tools: FunctionTool, ExitLoopTool, McpToolset, BasicToolset, FilteredToolset
  • Resilience: Tool timeout, retry budgets, circuit breakers per agent

Tools

| Tool | Purpose | Operations |
|------|---------|------------|
| FileTool | File system operations | read, write, append, list, delete |
| GitTool | Version control | status, add, commit, diff, log, branch |
| TestTool | Test execution | run, check, coverage (multi-language) |
| ProgressTool | Progress tracking | read, append, summary |
| TaskTool | Task management | list, get_next, update_status, complete |
| AddFeatureTool | Feature additions | append to PRD and regenerate tasks |
| RunPipelineTool | Pipeline execution | run full or partial pipeline |
| RunProjectTool | Project execution | build, run, test with language detection |
| GetTimeTool | Time utilities | current time for progress entries |
| WebSearchTool | Web search | search for documentation and solutions |

Installation

Add to your Cargo.toml:

[dependencies]
adk-ralph = { path = "../adk-ralph" }

Or clone and build directly:

git clone https://github.com/zavora-ai/adk-ralph.git
cd adk-ralph
cargo build --release

Quick Start

# 1. Copy the example configuration
cp .env.example .env

# 2. Set your API key (pick your provider)
# For Gemini (default):
export GOOGLE_API_KEY=your-key
# Or Anthropic:
export ANTHROPIC_API_KEY=your-key
# Or OpenAI:
export OPENAI_API_KEY=your-key

# 3. Run Ralph with a project idea
cargo run -- "Create a CLI calculator in Rust"

# 4. Or start interactive chat mode
cargo run -- chat

CLI Usage

ralph <prompt>                    # Run full pipeline with a prompt
ralph run <prompt>                # Same as above (explicit)
ralph resume --phase design       # Resume from a specific phase
ralph chat                        # Start interactive REPL
ralph chat --resume               # Resume previous chat session
ralph chat --auto-approve         # Skip change confirmations
ralph status                      # Show pipeline status and artifacts
ralph config                      # Validate current configuration

Global Options

ralph -d verbose <prompt>         # Verbose output with tool calls
ralph -d debug <prompt>           # Full debug output
ralph -p /path/to/project <prompt> # Override project output directory

Configuration

Ralph is configured via environment variables with sensible defaults. See .env.example for all options.

API Keys (Required)

Set at least one API key based on your chosen provider:

| Variable | Description |
|----------|-------------|
| GOOGLE_API_KEY | Google Gemini API key (default provider) |
| ANTHROPIC_API_KEY | Anthropic API key for Claude models |
| OPENAI_API_KEY | OpenAI API key for GPT models |
| DEEPSEEK_API_KEY | DeepSeek API key |
| GROQ_API_KEY | Groq API key for ultra-fast inference |
| FIREWORKS_API_KEY | Fireworks AI API key |
| TOGETHER_API_KEY | Together AI API key |
| MISTRAL_API_KEY | Mistral AI API key |
| PERPLEXITY_API_KEY | Perplexity API key |
| CEREBRAS_API_KEY | Cerebras API key |
| SAMBANOVA_API_KEY | SambaNova API key |
| AZURE_AI_API_KEY | Azure AI Inference API key |
| AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY | Amazon Bedrock (IAM credentials) |

Per-Agent Model Configuration

Each agent can use a different model and provider:

| Variable | Default | Description |
|----------|---------|-------------|
| RALPH_PRD_PROVIDER | gemini | Provider for PRD Agent |
| RALPH_PRD_MODEL | gemini-3.1-pro-preview | Model for PRD Agent |
| RALPH_PRD_THINKING | false | Enable thinking mode |
| RALPH_ARCHITECT_PROVIDER | gemini | Provider for Architect Agent |
| RALPH_ARCHITECT_MODEL | gemini-3-pro-preview | Model for Architect Agent |
| RALPH_ARCHITECT_THINKING | false | Enable thinking mode |
| RALPH_LOOP_PROVIDER | gemini | Provider for Ralph Loop Agent |
| RALPH_LOOP_MODEL | gemini-2.5-flash | Model for Ralph Loop Agent |
| RALPH_LOOP_THINKING | false | Enable thinking mode |

Supported Providers: gemini, openai, anthropic, deepseek, groq, ollama, fireworks, together, mistral, perplexity, cerebras, sambanova, bedrock, azure-ai

Current Model Names (Apr 2026)

| Provider | Models |
|----------|--------|
| Gemini | gemini-3.1-pro-preview, gemini-3-pro-preview, gemini-3-flash-preview, gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite |
| Anthropic | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5-20251001, claude-sonnet-4-5-20250929 |
| OpenAI | gpt-5.2, gpt-5.1, gpt-5, gpt-5-mini, gpt-5-nano, gpt-4.1, o3, o4-mini |
| DeepSeek | deepseek-r1-0528, deepseek-r1, deepseek-v3.1, deepseek-chat |
| Groq | llama-4-scout, llama-3.1-70b-versatile, llama-3.1-8b-instant |

Execution Settings

| Variable | Default | Range | Description |
|----------|---------|-------|-------------|
| RALPH_MAX_ITERATIONS | 50 | 1–1000 | Maximum loop iterations |
| RALPH_MAX_TASK_RETRIES | 3 | 1–10 | Maximum retries for failed tasks |
| RALPH_DEBUG_LEVEL | normal | minimal/normal/verbose/debug | Output verbosity |
| RALPH_COMPLETION_PROMISE | All tasks completed successfully! | | Message on completion |

File Paths

All paths are relative to RALPH_PROJECT_PATH:

| Variable | Default | Description |
|----------|---------|-------------|
| RALPH_PROJECT_PATH | . | Base directory for the project |
| RALPH_PRD_PATH | prd.md | Path to PRD file |
| RALPH_DESIGN_PATH | design.md | Path to design file |
| RALPH_TASKS_PATH | tasks.json | Path to tasks file |
| RALPH_PROGRESS_PATH | progress.json | Path to progress file |

Telemetry

| Variable | Default | Description |
|----------|---------|-------------|
| RALPH_TELEMETRY_ENABLED | true | Enable/disable telemetry |
| RALPH_SERVICE_NAME | ralph | Service name for telemetry |
| RALPH_LOG_LEVEL | info | Log level: trace, debug, info, warn, error |
| RALPH_OTLP_ENDPOINT | (none) | OTLP endpoint for telemetry export |

Example Configurations

Minimal (Gemini — default)

export GOOGLE_API_KEY=your-key
cargo run -- "Create a REST API in Rust"

Anthropic with Thinking

export ANTHROPIC_API_KEY=sk-ant-...
export RALPH_PRD_PROVIDER=anthropic
export RALPH_PRD_MODEL=claude-sonnet-4-5-20250929
export RALPH_PRD_THINKING=true
export RALPH_ARCHITECT_PROVIDER=anthropic
export RALPH_ARCHITECT_MODEL=claude-sonnet-4-5-20250929
export RALPH_ARCHITECT_THINKING=true
export RALPH_LOOP_PROVIDER=anthropic
export RALPH_LOOP_MODEL=claude-haiku-4-5-20251001

OpenAI

export OPENAI_API_KEY=sk-...
export RALPH_PRD_PROVIDER=openai
export RALPH_PRD_MODEL=gpt-5
export RALPH_ARCHITECT_PROVIDER=openai
export RALPH_ARCHITECT_MODEL=gpt-5
export RALPH_LOOP_PROVIDER=openai
export RALPH_LOOP_MODEL=gpt-5-mini

Local Ollama (No API costs)

export RALPH_PRD_PROVIDER=ollama
export RALPH_PRD_MODEL=llama3.1:70b
export RALPH_ARCHITECT_PROVIDER=ollama
export RALPH_ARCHITECT_MODEL=llama3.1:70b
export RALPH_LOOP_PROVIDER=ollama
export RALPH_LOOP_MODEL=llama3.1:8b

DeepSeek (Cost-Effective Reasoning)

export DEEPSEEK_API_KEY=your-key
export RALPH_PRD_PROVIDER=deepseek
export RALPH_PRD_MODEL=deepseek-r1
export RALPH_ARCHITECT_PROVIDER=deepseek
export RALPH_ARCHITECT_MODEL=deepseek-r1
export RALPH_LOOP_PROVIDER=deepseek
export RALPH_LOOP_MODEL=deepseek-chat

Groq (Ultra-Fast Inference)

export GROQ_API_KEY=your-key
export RALPH_PRD_PROVIDER=groq
export RALPH_PRD_MODEL=llama-3.1-70b-versatile
export RALPH_ARCHITECT_PROVIDER=groq
export RALPH_ARCHITECT_MODEL=llama-3.1-70b-versatile
export RALPH_LOOP_PROVIDER=groq
export RALPH_LOOP_MODEL=llama-3.1-8b-instant

Programmatic

use adk_ralph::{RalphConfig, AgentModelConfig, ModelConfig};

let config = RalphConfig::builder()
    .agents(AgentModelConfig {
        prd_model: ModelConfig::new("gemini", "gemini-3.1-pro-preview")
            .with_thinking()
            .with_max_tokens(8192),
        architect_model: ModelConfig::new("gemini", "gemini-3-pro-preview")
            .with_thinking()
            .with_max_tokens(8192),
        ralph_model: ModelConfig::new("gemini", "gemini-2.5-flash")
            .with_max_tokens(4096),
    })
    .max_iterations(100)
    .project_path("./my-project")
    .build()?;

Data Models

PRD (prd.md)

Product Requirements Document with user stories and acceptance criteria.

Design (design.md)

System architecture with components, technology stack, and file structure.

Tasks (tasks.json)

Structured task list with priorities, dependencies, and status tracking.

{
  "project": "project-name",
  "language": "rust",
  "tasks": [
    {
      "id": "T-001",
      "title": "Set up project structure",
      "priority": 1,
      "status": "pending",
      "dependencies": []
    }
  ]
}

Progress (progress.json)

Append-only log of completed work, learnings, and gotchas.

{
  "project": "project-name",
  "entries": [
    {
      "task_id": "T-001",
      "approach": "Created standard Rust project layout",
      "learnings": ["Used workspace structure"],
      "gotchas": ["Remember to add crates to workspace"]
    }
  ]
}

Telemetry

Ralph captures telemetry via OpenTelemetry:

  • Spans: ralph.prd_generation, ralph.architect_design, ralph.loop_iteration, ralph.task_execution, ralph.tool_call
  • Metrics: ralph_iterations_total, ralph_tasks_completed, ralph_tasks_failed

# Start Jaeger for local tracing
docker run -d --name jaeger \
  -p 16686:16686 -p 4317:4317 \
  jaegertracing/all-in-one:latest

export RALPH_OTLP_ENDPOINT=http://localhost:4317
# View traces at http://localhost:16686

Example Prompts

Ralph works best with clear, detailed project descriptions:

# CLI tool
cargo run -- "Create a CLI task manager in Rust with clap, JSON storage, and colored output"

# Web API
cargo run -- "Create a REST API for a bookstore in Python using FastAPI with SQLite and JWT auth"

# Library
cargo run -- "Create a Go rate limiting library with token bucket and sliding window algorithms"

# Simple one-liner
cargo run -- "Create a Rust CLI for converting CSV to JSON"

See examples/prompts/ for detailed prompt templates across languages.

Development

# Run all tests
cargo test

# Run with verbose output
cargo run -- -d verbose "Your prompt"

# Validate configuration
cargo run -- config

License

Apache-2.0
