Sage Agent


πŸ“¦ ARCHIVED PROJECT: This project has been archived and is no longer actively maintained. While the code remains available for reference and learning purposes, no new features will be added and issues will not be addressed. Feel free to fork this repository if you'd like to continue development.

🌐 Language / 语言

English | δΈ­ζ–‡


πŸ€– Sage Agent is a powerful LLM-based agent system for general-purpose software engineering tasks, built in Rust with modern async architecture and clean design patterns.

πŸ“¦ Archive Notice

This project has been archived as of July 2025.

Why Archived?

After extensive development and experimentation, we've decided to archive this Rust-based implementation in favor of a TypeScript-based approach. Our analysis revealed that:

  • 🌐 Ecosystem Advantage: TypeScript/Node.js has a significantly richer AI/LLM ecosystem with official SDKs from all major providers
  • ⚑ Development Velocity: Single-language stack dramatically improves development speed and team collaboration
  • πŸ”§ Maintenance Simplicity: Unified toolchain reduces complexity and deployment overhead
  • πŸ“Š Performance Reality: In AI agent scenarios, network I/O (LLM API calls) is the bottleneck, not CPU performance

What's Available

This repository contains a fully functional Rust implementation with:

  • βœ… Complete concurrent tool execution system
  • βœ… Modern terminal UI with React + Ink
  • βœ… Multi-LLM provider support
  • βœ… Comprehensive tool ecosystem
  • βœ… Advanced trajectory recording

For Future Development

If you're interested in continuing this project:

  1. Fork this repository - All code is MIT licensed and ready to use
  2. Check the TypeScript migration insights - See our analysis in the commit history
  3. Consider hybrid approaches - The UI components and architecture patterns are valuable

We believe this codebase serves as an excellent reference implementation for Rust-based AI agents and demonstrates advanced patterns in concurrent tool execution.

πŸ”„ Project Origin

This project is a Rust rewrite of the original Trae Agent by ByteDance. While maintaining the core functionality and philosophy of the original Python-based agent, Sage Agent brings:

  • πŸš€ Performance: Rust's zero-cost abstractions and memory safety
  • ⚑ Concurrency: Modern async/await patterns with Tokio
  • πŸ›‘οΈ Type Safety: Compile-time guarantees and robust error handling
  • πŸ—οΈ Modularity: Clean architecture with well-defined service boundaries

We extend our gratitude to the ByteDance team and the open-source community for creating the foundational Trae Agent project that inspired this implementation.

πŸ“‹ Table of Contents

  • ✨ Features
  • πŸ—οΈ Architecture
  • πŸš€ Quick Start
  • πŸ› οΈ Available Tools
  • πŸ“– Examples
  • πŸ“Š Trajectory Recording
  • 🎨 Advanced Features
  • ⚑ Performance Optimization
  • πŸ”§ Development
  • 🎯 Use Cases
  • πŸ“ Configuration
  • πŸ“š Documentation
  • πŸ”§ Troubleshooting
  • πŸ”„ Migration Insights
  • 🀝 Contributing
  • πŸ“„ License
  • πŸ™ Acknowledgments
  • πŸŽ“ Educational Value

✨ Features

πŸ€– AI Integration | πŸ› οΈ Developer Tools | 🎨 User Experience
Multi-LLM Support (OpenAI, Anthropic, Google) | Rich Tool Ecosystem (Code editing, Bash, Retrieval) | Interactive CLI (Animations, Progress indicators)
Smart Context Handling | Task Management System | Terminal Markdown Rendering
Trajectory Recording | SDK Integration | Beautiful UI Components

πŸ”₯ Key Highlights

  • 🌐 Multi-LLM Support: Compatible with OpenAI, Anthropic, Google, and other LLM providers
  • πŸ› οΈ Rich Tool Ecosystem: Built-in tools for code editing, bash execution, codebase retrieval, and task management
  • πŸ’» Interactive CLI: Beautiful terminal interface with animations and progress indicators
  • πŸ“¦ SDK Integration: High-level SDK for programmatic usage
  • πŸ“Š Trajectory Recording: Complete execution tracking and replay capabilities
  • πŸ“ Markdown Rendering: Terminal-based markdown display with syntax highlighting
  • πŸ“‹ Task Management: Built-in task planning and progress tracking
  • πŸ—οΈ Clean Architecture: Modular design with clear separation of concerns

πŸ—οΈ Architecture

The project is organized as a Rust workspace with four main crates:

  • sage-core: Core library with agent execution, LLM integration, and tool management
  • sage-cli: Command-line interface with interactive mode and rich UI
  • sage-sdk: High-level SDK for programmatic integration
  • sage-tools: Collection of built-in tools for various tasks

πŸš€ Quick Start

πŸ’‘ TL;DR: cargo install --git https://github.com/majiayu000/sage sage-cli && sage - Get started in seconds!

# πŸš€ One-line installation
cargo install --git https://github.com/majiayu000/sage sage-cli

# 🎯 Start interactive mode
sage

# ✨ Or run a specific task
sage run "Create a Python script that calculates fibonacci numbers"

System Requirements

  • Rust: 1.85+ (latest stable recommended)
  • Operating System: Linux, macOS, Windows
  • Memory: Minimum 4GB RAM (8GB+ recommended)
  • API Keys: Valid keys for your chosen LLM provider(s)

Installation

Method 1: Build from Source

# Clone the repository
git clone https://github.com/majiayu000/sage
cd sage

# Build the project
cargo build --release

# Install the CLI
cargo install --path crates/sage-cli

Method 2: Install via Cargo

# Install from crates.io (if published)
cargo install sage-cli

# Or install from Git repository
cargo install --git https://github.com/majiayu000/sage sage-cli

Verify Installation

# Check version
sage --version

# Show help
sage --help

Configuration

Create a configuration file sage_config.json:

{
  "providers": {
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "base_url": "https://api.openai.com/v1"
    }
  },
  "default_provider": "openai",
  "model_parameters": {
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 4000
  },
  "max_steps": 20,
  "working_directory": "."
}

Basic Usage

CLI Mode

# Interactive mode (default)
sage

# Run a specific task
sage run "Create a Python script that calculates fibonacci numbers"

# With custom configuration
sage --config-file my_config.json run "Analyze this codebase structure"

SDK Usage

use sage_sdk::{SageAgentSDK, RunOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create SDK instance
    let sdk = SageAgentSDK::new()?
        .with_provider_and_model("openai", "gpt-4", None)?
        .with_working_directory("./my-project")
        .with_max_steps(10);

    // Execute a task
    let result = sdk.run("Create a README file for this project").await?;
    
    if result.is_success() {
        println!("βœ… Task completed successfully!");
        println!("πŸ“Š Used {} tokens in {} steps", 
                 result.statistics().total_tokens,
                 result.statistics().total_steps);
    }
    
    Ok(())
}

πŸ› οΈ Available Tools

Sage Agent comes with a comprehensive set of built-in tools:

  • bash: Execute shell commands and scripts
  • edit: Create and modify files with precise editing capabilities
  • json_edit: Specialized JSON file editing
  • codebase_retrieval: Intelligent code search and context retrieval
  • sequential_thinking: Step-by-step reasoning and planning
  • task_done: Mark tasks as completed
  • Task Management: view_tasklist, add_tasks, update_tasks, reorganize_tasklist
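
The tool set is designed to be extensible; the custom_tool.rs example (see Examples below) shows the real integration path. As a rough, self-contained sketch of the underlying idea only, the struct and method below are illustrative stand-ins rather than the actual sage-core tool abstraction: a tool is essentially a named async function over JSON arguments.

// Requires tokio and serde_json; the struct is an illustrative stand-in,
// not the real sage-core tool trait.
use serde_json::{json, Value};

struct WordCountTool;

impl WordCountTool {
    fn name(&self) -> &'static str {
        "word_count"
    }

    // Tools receive JSON arguments from the LLM and return a JSON result.
    async fn execute(&self, args: Value) -> Result<Value, String> {
        let text = args
            .get("text")
            .and_then(Value::as_str)
            .ok_or("missing `text` argument")?;
        Ok(json!({ "words": text.split_whitespace().count() }))
    }
}

#[tokio::main]
async fn main() {
    let tool = WordCountTool;
    let result = tool.execute(json!({ "text": "hello concurrent world" })).await;
    println!("{}: {:?}", tool.name(), result);
}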

πŸ“– Examples

The examples/ directory contains various usage examples:

  • basic_usage.rs: Simple SDK usage patterns
  • custom_tool.rs: Creating custom tools
  • markdown_demo.rs: Terminal markdown rendering
  • ui_demo.rs: Interactive UI components

Run examples with:

cargo run --example basic_usage
cargo run --example markdown_demo
cargo run --example trajectory_demo

πŸ“Š Trajectory Recording

Sage Agent automatically records detailed execution trajectories for debugging and analysis:

# Automatically generate trajectory files
sage run "Debug authentication module"
# Saved to: trajectories/trajectory_20250612_220546.json

# Custom trajectory file
sage run "Optimize database queries" --trajectory-file optimization_debug.json

Trajectory files contain:

  • LLM Interactions: All messages, responses, and tool calls
  • Agent Steps: State transitions and decision points
  • Tool Usage: Which tools were called and their results
  • Metadata: Timestamps, token usage, and execution metrics
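
Because trajectories are plain JSON, they are easy to post-process with standard tooling. Below is a minimal sketch using serde_json; the "steps" field name is an assumption about the file layout rather than a documented schema, so inspect a real trajectory file first.

use std::fs;
use serde_json::Value;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Path is illustrative; point this at any generated trajectory file.
    let raw = fs::read_to_string("trajectories/trajectory_20250612_220546.json")?;
    let trajectory: Value = serde_json::from_str(&raw)?;

    // Count recorded agent steps if the file exposes them as an array.
    if let Some(steps) = trajectory.get("steps").and_then(Value::as_array) {
        println!("recorded {} agent steps", steps.len());
    }

    // List top-level fields to see what else was captured.
    if let Some(obj) = trajectory.as_object() {
        for key in obj.keys() {
            println!("top-level field: {key}");
        }
    }
    Ok(())
}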

🎨 Advanced Features

Interactive Mode

In interactive mode, you can:

  • Enter any task description to execute
  • Use status to view agent information
  • Use help to get available commands
  • Use clear to clear the screen
  • Use exit or quit to end the session

Multi-Provider Support

# Use OpenAI
sage run "Create Python script" --provider openai --model gpt-4

# Use Anthropic
sage run "Code review" --provider anthropic --model claude-3-5-sonnet

# Use custom working directory
sage run "Add unit tests" --working-dir /path/to/project

Configuration Priority

  1. Command line arguments (highest priority)
  2. Configuration file values
  3. Environment variables
  4. Default values (lowest priority)
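
This precedence is easy to express in code. The sketch below resolves a single setting, max_steps, in that order; the SAGE_MAX_STEPS variable name is invented for illustration and is not taken from the actual configuration code.

use serde_json::Value;

// CLI argument > config file > environment variable > default.
fn resolve_max_steps(cli_arg: Option<u32>, config_file: &Value) -> u32 {
    cli_arg
        .or_else(|| {
            config_file
                .get("max_steps")
                .and_then(Value::as_u64)
                .map(|n| n as u32)
        })
        .or_else(|| {
            std::env::var("SAGE_MAX_STEPS") // illustrative variable name
                .ok()
                .and_then(|s| s.parse().ok())
        })
        .unwrap_or(20) // default from the sample config
}

fn main() {
    let file = serde_json::json!({ "max_steps": 15 });
    assert_eq!(resolve_max_steps(None, &file), 15); // config file wins without a CLI flag
    assert_eq!(resolve_max_steps(Some(5), &file), 5); // CLI argument overrides everything
    println!("precedence resolved as expected");
}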

⚑ Performance Optimization

Best Practices

  • Concurrent Processing: Sage Agent uses Tokio async runtime for efficient concurrent operations
  • Memory Management: Rust's zero-cost abstractions ensure minimal runtime overhead
  • Caching Strategy: Intelligent caching of LLM responses and tool results for improved performance
  • Streaming Processing: Support for streaming LLM responses for better user experience

Configuration Tuning

{
  "model_parameters": {
    "temperature": 0.1,        // Lower randomness for more consistent results
    "max_tokens": 2000,        // Adjust based on task complexity
    "stream": true             // Enable streaming responses
  },
  "max_steps": 15,             // Limit max steps to control costs
  "timeout_seconds": 300       // Set reasonable timeout
}

Monitoring and Logging

# Enable verbose logging
RUST_LOG=sage_core=debug,sage_cli=info cargo run

# Monitor token usage
sage run "Task description" --show-stats

# Performance profiling
RUST_LOG=trace cargo run --release
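
The RUST_LOG filters above follow the conventions of the tracing ecosystem. Assuming the binaries initialize tracing-subscriber with an environment filter (a reasonable guess for a Rust service, not confirmed from this README), the setup typically looks like this:

use tracing_subscriber::EnvFilter;

fn main() {
    // Requires the tracing and tracing-subscriber crates (with the env-filter feature).
    // Honors RUST_LOG, e.g. RUST_LOG=sage_core=debug,sage_cli=info.
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    tracing::info!("logging initialized");
    tracing::debug!("visible only when the filter allows debug for this target");
}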

πŸ”§ Development

Building

# Build all crates
cargo build

# Build with optimizations
cargo build --release

# Run tests
cargo test

# Run with logging
RUST_LOG=debug cargo run

Project Structure

sage-agent/
β”œβ”€β”€ crates/
β”‚   β”œβ”€β”€ sage-core/          # Core library
β”‚   β”‚   β”œβ”€β”€ src/agent/      # Agent execution logic
β”‚   β”‚   β”œβ”€β”€ src/llm/        # LLM client implementations
β”‚   β”‚   β”œβ”€β”€ src/tools/      # Tool system
β”‚   β”‚   └── src/ui/         # Terminal UI components
β”‚   β”œβ”€β”€ sage-cli/           # Command-line interface
β”‚   β”œβ”€β”€ sage-sdk/           # High-level SDK
β”‚   └── sage-tools/         # Built-in tools collection
β”œβ”€β”€ docs/                   # Comprehensive documentation
β”‚   β”œβ”€β”€ user-guide/         # User documentation
β”‚   β”œβ”€β”€ development/        # Developer documentation
β”‚   β”œβ”€β”€ architecture/       # System architecture docs
β”‚   β”œβ”€β”€ api/                # API reference
β”‚   └── planning/           # Project planning and roadmap
β”œβ”€β”€ examples/               # Usage examples
β”œβ”€β”€ trajectories/           # Execution trajectory files (gitignored)
β”œβ”€β”€ configs/                # Configuration templates and examples
└── Cargo.toml             # Workspace configuration

🎯 Use Cases

  • Code Generation: Create files, functions, and entire modules
  • Code Analysis: Understand and document existing codebases
  • Refactoring: Modernize and improve code structure
  • Testing: Generate and run test suites
  • Documentation: Create comprehensive project documentation
  • Automation: Automate repetitive development tasks

πŸ“ Configuration

Sage Agent supports flexible configuration through JSON files and environment variables:

{
  "providers": {
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "base_url": "https://api.openai.com/v1"
    },
    "anthropic": {
      "api_key": "${ANTHROPIC_API_KEY}",
      "base_url": "https://api.anthropic.com"
    }
  },
  "default_provider": "openai",
  "model_parameters": {
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 4000
  },
  "max_steps": 20,
  "working_directory": ".",
  "ui": {
    "enable_animations": true,
    "markdown_rendering": true
  },
  "trajectory": {
    "enabled": false,
    "directory": "trajectories",
    "auto_save": true,
    "save_interval_steps": 5
  }
}
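
For programmatic use, this file maps naturally onto serde types. The partial sketch below mirrors the JSON above; the struct names are not taken from the sage-core source, and the "${VAR}" expansion shown is an assumption about load-time behavior rather than documented functionality.

use std::collections::HashMap;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ProviderConfig {
    api_key: String,
    base_url: String,
}

#[derive(Debug, Deserialize)]
struct ModelParameters {
    model: String,
    temperature: f64,
    max_tokens: u32,
}

#[derive(Debug, Deserialize)]
struct SageConfig {
    providers: HashMap<String, ProviderConfig>,
    default_provider: String,
    model_parameters: ModelParameters,
    max_steps: u32,
    working_directory: String,
    // The ui and trajectory sections are omitted here; serde ignores unknown fields by default.
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut raw = std::fs::read_to_string("sage_config.json")?;
    // Expand the "${OPENAI_API_KEY}" placeholder before parsing (assumed behavior).
    if let Ok(key) = std::env::var("OPENAI_API_KEY") {
        raw = raw.replace("${OPENAI_API_KEY}", &key);
    }
    let config: SageConfig = serde_json::from_str(&raw)?;
    println!("default provider: {}", config.default_provider);
    println!("model: {}", config.model_parameters.model);
    Ok(())
}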

πŸ“š Documentation

Comprehensive documentation is available in the docs/ directory:

Quick Links

  • docs/user-guide/ - User documentation
  • docs/development/ - Developer documentation
  • docs/architecture/ - System architecture docs
  • docs/api/ - API reference
  • docs/planning/ - Project planning and roadmap

πŸ”§ Troubleshooting

Common Issues

Runtime Errors:

# Try setting RUST_LOG
RUST_LOG=debug cargo run

API Key Issues:

# Verify API keys are set
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY

# Check configuration
sage --show-config

Permission Errors:

# Ensure proper permissions for file operations
chmod +x /path/to/your/project

Environment Variables

  • OPENAI_API_KEY - OpenAI API key
  • ANTHROPIC_API_KEY - Anthropic API key
  • GOOGLE_API_KEY - Google Gemini API key
  • OPENROUTER_API_KEY - OpenRouter API key

Development Guidelines

  • Follow Rust official code style guidelines
  • Add tests for new features
  • Update documentation as needed
  • Use explicit types and doc comments where they aid clarity
  • Ensure all tests pass before committing
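
As a minimal illustration of the testing guideline above, a unit test might look like the sketch below; the helper function and module are invented for the example, not taken from the codebase.

// Example helper that a new feature might add.
fn normalize_task_title(raw: &str) -> String {
    raw.trim().to_lowercase()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn normalizes_whitespace_and_case() {
        assert_eq!(normalize_task_title("  Fix CI  "), "fix ci");
    }

    // Async code paths can be exercised with the tokio test macro
    // (requires tokio as a dev-dependency with the macros feature).
    #[tokio::test]
    async fn async_paths_work_too() {
        let title = tokio::task::spawn_blocking(|| normalize_task_title("Refactor"))
            .await
            .unwrap();
        assert_eq!(title, "refactor");
    }
}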

πŸ”„ Migration Insights

During our development process, we conducted extensive analysis comparing Rust vs TypeScript for AI agent development. Here are our key findings:

🎯 Performance Analysis

Aspect | Rust | TypeScript | Winner
Concurrent Tool Execution | ~120ms | ~150ms | Rust (+25%)
LLM API Calls | 1-5 seconds | 1-5 seconds | Tie
Overall Agent Response | 1.12-5.15s | 1.15-5.15s | Negligible difference

Key Insight: In AI agent scenarios, network I/O dominates performance, making language choice less critical.

πŸ› οΈ Development Experience

Factor | Rust | TypeScript
Ecosystem Richness | Limited AI libraries | Rich AI/LLM ecosystem
Development Speed | Slower (compile times) | Faster (hot reload)
Team Onboarding | Steep learning curve | Familiar to most devs
Debugging | Complex (async + FFI) | Straightforward
Deployment | Complex (cross-platform) | Simple (Node.js)

πŸ—οΈ Architecture Lessons

What Worked Well in Rust:

  • βœ… Concurrent tool execution: Excellent async/await patterns
  • βœ… Type safety: Compile-time guarantees prevented many bugs
  • βœ… Memory efficiency: Zero-cost abstractions
  • βœ… Clean architecture: Forced good design patterns

What Was Challenging:

  • ❌ UI Integration: Complex FFI for terminal UI
  • ❌ Ecosystem gaps: Missing AI-specific libraries
  • ❌ Build complexity: Cross-platform compilation issues
  • ❌ Development velocity: Slower iteration cycles

πŸ“Š Recommended Approach

For AI Agent development, we recommend:

  1. TypeScript/Node.js for rapid prototyping and rich AI ecosystem
  2. Rust for performance-critical components (if needed)
  3. Hybrid approach for the best of both worlds

πŸŽ“ Learning Value

This Rust implementation demonstrates:

  • Advanced concurrent programming patterns
  • Clean architecture in systems programming
  • Modern async Rust techniques
  • Terminal UI development with React + Ink

🀝 Contributing

Note: While this project is archived, we welcome discussions and learning exchanges! Please see our contributing guidelines for historical context.

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

Note: This Rust implementation maintains compatibility with the MIT License of the original Trae Agent project.

πŸ™ Acknowledgments

  • Original Inspiration: This project is based on Trae Agent by ByteDance - a pioneering LLM-based agent for software engineering tasks
  • Partial Inspiration: Augment Code - Advanced AI code assistant and context engine, providing valuable reference for agent tool system design
  • Architecture Insights: Gemini CLI - Excellent reference for TypeScript-based AI agent architecture
  • Built with Rust and modern async patterns
  • Powered by leading LLM providers (Google, Anthropic, OpenAI, etc.)
  • Inspired by the open-source community's commitment to intelligent development automation
  • Special thanks to the Trae Agent contributors and maintainers for their foundational work
  • Appreciation to the Augment Code team for their innovative work in AI-assisted development
  • Gratitude to the Rust community for excellent async programming patterns and tools

πŸŽ“ Educational Value

This archived project serves as a comprehensive example of:

  • Modern Rust development with async/await patterns
  • Concurrent tool execution in AI agent systems
  • Terminal UI development with React + Ink integration
  • Clean architecture principles in systems programming
  • LLM integration patterns and best practices

Sage Agent - A learning journey in AI agent architecture. πŸ“šβœ¨

"Every archived project teaches us something valuable for the next one."
